| Column | Type | Range / cardinality |
| --- | --- | --- |
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (class) | 245 values |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string (class) | 48 values |
| createdAt | unknown | n/a |
| card | string | length 1–901k |
PrunaAI/Doctor-Shotgun-TinyLlama-1.1B-32k-Instruct-AWQ-4bit-smashed
PrunaAI
"2024-06-23T15:38:33Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T15:38:05Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/Doctor-Shotgun-TinyLlama-1.1B-32k-Instruct-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrabhakarVenkat/ICR-Identifying-Age-Related-Conditions
PrabhakarVenkat
"2024-06-23T15:49:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T15:38:28Z"
# ICR - Identifying Age-Related Conditions ![](https://github.com/prabhakarvenkat/ICR---Identifying-Age-Related-Conditions/blob/76d681d95058d209f809ce1e31d1a8e769334132/Screenshot%20(34).png) Goal of the Competition: The goal of this competition is to predict if a person has any of three medical conditions. You are asked to predict whether the person has one or more of the three medical conditions (Class 1) or none of them (Class 0). You will create a model trained on measurements of health characteristics. Determining whether someone has these medical conditions requires a long and intrusive process of collecting information from patients. With predictive models, we can shorten this process and keep patient details private by collecting key characteristics relevant to the conditions and then encoding these characteristics. Your work will help researchers discover the relationship between measurements of certain characteristics and potential patient conditions. Context: They say age is just a number, but a whole host of health issues comes with aging. From heart disease and dementia to hearing loss and arthritis, aging is a risk factor for numerous diseases and complications. The growing field of bioinformatics includes research into interventions that can help slow and reverse biological aging and prevent major age-related ailments. Data science could have a role to play in developing new methods to solve problems with diverse data, even when the number of samples is small. Currently, models like XGBoost and random forest are used to predict medical conditions, yet their performance is not good enough (a minimal baseline sketch follows this card). When dealing with critical problems where lives are on the line, models need to make correct predictions reliably and consistently across different cases. Founded in 2015, competition host InVitro Cell Research, LLC (ICR) is a privately funded company focused on regenerative and preventive personalized medicine. Its offices and labs in the greater New York City area offer state-of-the-art research space. InVitro Cell Research's scientists are what set it apart, helping to guide and define its mission of researching how to repair the effects of aging, fast. In this competition, you'll work with measurements of health characteristic data to solve critical problems in bioinformatics. Based on minimal training data, you'll create a model to predict if a person has any of three medical conditions, with an aim to improve on existing methods. You could help advance the growing field of bioinformatics and explore new methods to solve complex problems with diverse data. ## Acknowledgements - [KAGGLE](https://www.kaggle.com/competitions/icr-identify-age-related-conditions) ## Appendix This is a Kaggle competition. ## Author - [PRABHAKAR V](https://github.com/prabhakarvenkat) ## Screenshots from Kaggle ![](https://github.com/prabhakarvenkat/ICR---Identifying-Age-Related-Conditions/blob/76d681d95058d209f809ce1e31d1a8e769334132/Screenshot%20(35).png) ## Used By This project is used by the following company: - InVitro Cell Research
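The competition asks for a binary classifier over tabular health measurements. A hypothetical baseline sketch using XGBoost, one of the models mentioned above — the column names `Id` and `Class` follow the Kaggle dataset layout, non-numeric columns are dropped for simplicity, and none of this code is part of the original repository:

```python
# Hedged baseline sketch for ICR: binary classification with XGBoost.
import pandas as pd
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

train = pd.read_csv("train.csv")
X = train.drop(columns=["Id", "Class"]).select_dtypes("number")  # numeric features only
y = train["Class"]
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("validation log loss:", log_loss(y_val, model.predict_proba(X_val)))
```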
Ginmaique/gmaique
Ginmaique
"2024-06-23T15:43:26Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T15:43:26Z"
--- license: apache-2.0 ---
katenkoy/calfin
katenkoy
"2024-06-23T15:44:34Z"
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
"2024-06-23T15:44:23Z"
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - semantic-segmentation - pytorch - segmentation-models-pytorch languages: - python --- # FPN Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.FPN.from_pretrained("calfin") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "resnet34", "encoder_depth": 5, "encoder_weights": "imagenet", "decoder_pyramid_channels": 256, "decoder_segmentation_channels": 128, "decoder_merge_policy": "add", "decoder_dropout": 0.2, "in_channels": 3, "classes": 1, "activation": None, "upsampling": 4, "aux_params": None } ``` ## Model metrics ```json [ { "test_per_image_iou": 0.7881532311439514, "test_dataset_iou": 0.7835019826889038 } ] ``` ## Dataset Dataset name: CALFIN ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
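The card shows how to load the model but not how to run it. A minimal inference sketch, assumed from the init parameters above (`activation=None`, `classes=1`) and not part of the original card:

```python
# Hedged sketch: run the loaded FPN on a dummy 3-channel image and
# threshold the raw logits to get a binary mask.
import torch
import segmentation_models_pytorch as smp

model = smp.FPN.from_pretrained("calfin")
model.eval()

image = torch.rand(1, 3, 256, 256)          # placeholder input; H and W divisible by 32
with torch.inference_mode():
    logits = model(image)                    # shape (1, 1, 256, 256)
    mask = (logits.sigmoid() > 0.5).long()   # binary segmentation mask
```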
PrunaAI/simecek-cswikimistral_0.1-AWQ-4bit-smashed
PrunaAI
"2024-06-23T15:46:33Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "base_model:simecek/cswikimistral_0.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T15:44:45Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: simecek/cswikimistral_0.1 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo simecek/cswikimistral_0.1 installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/simecek-cswikimistral_0.1-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("simecek/cswikimistral_0.1") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model simecek/cswikimistral_0.1 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/PORTULAN-gervasio-7b-portuguese-ptbr-decoder-AWQ-4bit-smashed
PrunaAI
"2024-06-23T15:46:40Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:PORTULAN/gervasio-7b-portuguese-ptbr-decoder", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T15:44:46Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: PORTULAN/gervasio-7b-portuguese-ptbr-decoder metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo PORTULAN/gervasio-7b-portuguese-ptbr-decoder installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/PORTULAN-gervasio-7b-portuguese-ptbr-decoder-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("PORTULAN/gervasio-7b-portuguese-ptbr-decoder") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model PORTULAN/gervasio-7b-portuguese-ptbr-decoder before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
TatevK/fineTuned_Model
TatevK
"2024-06-24T11:13:58Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T15:45:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pasithbas159/langtai_llm_study
pasithbas159
"2024-06-23T15:50:58Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T15:50:51Z"
--- base_model: openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** pasithbas159 - **License:** apache-2.0 - **Finetuned from model:** openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
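The card does not include usage code. A hedged loading sketch — this assumes the repo contains merged transformers weights; if only LoRA adapters were pushed, they would need to be loaded with `peft` instead:

```python
# Hedged sketch: load and sample from the fine-tuned model via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pasithbas159/langtai_llm_study"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```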
PrabhakarVenkat/Numpy_Basics
PrabhakarVenkat
"2024-06-23T15:52:23Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T15:51:33Z"
# Numpy_Basics ![logo](https://github.com/prabhakarvenkat/Numpy_Basics/blob/4784ae0b600d35a1fe12375a41b840ce37bb7643/numpy.png) <h2>What is NumPy?</h2> <p>NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices. NumPy was created in 2005 by Travis Oliphant. It is an open-source project and you can use it freely. NumPy stands for Numerical Python.</p> -------------------------------------------------------------------------------------------------------------------------------------------------- <h2>Why Use NumPy?</h2> <p>In Python we have lists that serve the purpose of arrays, but they are slow to process. NumPy aims to provide an array object that is up to 50x faster than traditional Python lists. The array object in NumPy is called ndarray; it provides a lot of supporting functions that make working with ndarray very easy. Arrays are used very frequently in data science, where speed and resources are very important.</p> -------------------------------------------------------------------------------------------------------------------------------------------------- <h2>Why is NumPy Faster Than Lists?</h2> <p>NumPy arrays are stored in one continuous place in memory, unlike lists, so processes can access and manipulate them very efficiently. This behavior is called locality of reference in computer science. This is the main reason why NumPy is faster than lists. It is also optimized to work with the latest CPU architectures.</p> -------------------------------------------------------------------------------------------------------------------------------------------------- <h2>Which Language is NumPy Written in?</h2> <p>NumPy is a Python library and is written partially in Python, but most of the parts that require fast computation are written in C or C++.</p>
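A short illustration of the points above — the same computation as a vectorized ndarray operation and as a plain Python list loop:

```python
import numpy as np

arr = np.arange(1_000_000)
squares = arr ** 2                      # single vectorized operation on contiguous memory

lst = list(range(1_000_000))
squares_list = [x ** 2 for x in lst]    # element-by-element Python-level loop
```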
PrabhakarVenkat/Pandas_Basics
PrabhakarVenkat
"2024-06-23T15:54:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T15:53:29Z"
# Pandas_Basic ![logo](https://github.com/prabhakarvenkat/Pandas_Basic/blob/b72da381b67e6cae96b1829135d83d9fdcdf6bd1/pandas.jpg) <h2>What is Pandas?</h2> <p>Pandas is a Python library used for working with data sets. It has functions for analyzing, cleaning, exploring, and manipulating data. The name "Pandas" references both "Panel Data" and "Python Data Analysis"; the library was created by Wes McKinney in 2008.</p> -------------------------------------------------------------------------------------------------------- <h2>Why Use Pandas?</h2> <p>Pandas allows us to analyze big data and draw conclusions based on statistical theories. Pandas can clean messy data sets and make them readable and relevant. Relevant data is very important in data science.</p> -------------------------------------------------------------------------------------------------------- <h2>What Can Pandas Do?</h2> <p>Pandas gives you answers about the data, such as: Is there a correlation between two or more columns? What is the average value? The max value? The min value? Pandas can also delete rows that are not relevant or contain wrong values, like empty or NULL values. This is called cleaning the data.</p>
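A short example of the operations described above — correlation, summary statistics, and dropping rows with NULL values:

```python
import pandas as pd

df = pd.DataFrame({"age": [25, 32, None, 40], "score": [88, 75, 90, None]})
print(df.corr())                                            # correlation between columns
print(df["age"].mean(), df["age"].max(), df["age"].min())   # average, max, min
cleaned = df.dropna()                                       # drop rows with empty/NULL values
```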
PrabhakarVenkat/Matplotlib_Basics
PrabhakarVenkat
"2024-06-23T15:55:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T15:55:25Z"
# Matplotlib_Basics ![logo](https://github.com/prabhakarvenkat/Matplotlib_Basics/blob/63651e5fca73e9114ae478c86f782de43ad75991/matplot_title_logo.png) <h2>What is Matplotlib?</h2> <p>Matplotlib is a low-level graph-plotting library in Python that serves as a visualization utility. Matplotlib was created by John D. Hunter. Matplotlib is open source and we can use it freely. Matplotlib is mostly written in Python; a few segments are written in C, Objective-C, and JavaScript for platform compatibility.</p>
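A minimal plotting example for the description above:

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x), label="sin(x)")   # basic line plot
plt.xlabel("x")
plt.ylabel("y")
plt.title("A basic Matplotlib line plot")
plt.legend()
plt.show()
```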
PrunaAI/codellama-CodeLlama-13b-hf-AWQ-4bit-smashed
PrunaAI
"2024-06-23T15:58:58Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:codellama/CodeLlama-13b-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T15:55:50Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: codellama/CodeLlama-13b-hf metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo codellama/CodeLlama-13b-hf installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/codellama-CodeLlama-13b-hf-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-13b-hf") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model codellama/CodeLlama-13b-hf before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
hajimr80/Movie_Genre_Classifier
hajimr80
"2024-06-23T15:58:49Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T15:55:54Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/UnicomLLM-Unichat-llama3-Chinese-8B-28K-AWQ-4bit-smashed
PrunaAI
"2024-06-23T15:58:35Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:UnicomLLM/Unichat-llama3-Chinese-8B-28K", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T15:55:58Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: UnicomLLM/Unichat-llama3-Chinese-8B-28K metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo UnicomLLM/Unichat-llama3-Chinese-8B-28K installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/UnicomLLM-Unichat-llama3-Chinese-8B-28K-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("UnicomLLM/Unichat-llama3-Chinese-8B-28K") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model UnicomLLM/Unichat-llama3-Chinese-8B-28K before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrabhakarVenkat/PRABHAKAR_PORTFOLIO
PrabhakarVenkat
"2024-06-23T15:57:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T15:56:32Z"
# PRABHAKAR_PORTFOLIO
48xrf/corina
48xrf
"2024-06-23T16:00:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T15:59:43Z"
Entry not found
PrabhakarVenkat/Flask_Basic
PrabhakarVenkat
"2024-06-23T16:01:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:00:05Z"
# Flask_Basic ![logo](https://github.com/prabhakarvenkat/Flask_Basic/blob/94bd3b3d6d8f4736188bff6dcead36b086ad8f5a/flask.png) This repo has been updated to work with `Python v3.8` and up. ## How To Run 1. Install `virtualenv`: ``` $ pip install virtualenv ``` 2. Open a terminal in the project root directory and run: ``` $ virtualenv env ``` 3. Then run the command: ``` $ .\env\Scripts\activate ``` 4. Then install the dependencies: ``` $ (env) pip install -r requirements.txt ``` 5. Finally start the web server: ``` $ (env) python app.py ``` This server will start on port 5000 by default. You can change this in `app.py` by modifying the following line: ```python if __name__ == "__main__": app.run(debug=True, port=<desired port>) ``` A minimal example `app.py` is sketched after this card. ## Contributing Since this is a repository for an introduction, the code should remain the same as the code shown in the repository. Any pull requests that don't address security flaws or fixes for language updates will be automatically closed. Style changes, adding libraries, etc. are not valid changes for submitting a pull request. Thank you.
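For reference, a minimal `app.py` consistent with the steps above — a sketch only; the repository's actual `app.py` may differ:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(debug=True, port=5000)  # replace 5000 with your desired port
```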
PrunaAI/alfredplpl-Llama-3-8B-Instruct-Ja-AWQ-4bit-smashed
PrunaAI
"2024-06-23T16:03:57Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:alfredplpl/Llama-3-8B-Instruct-Ja", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T16:01:23Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: alfredplpl/Llama-3-8B-Instruct-Ja metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo alfredplpl/Llama-3-8B-Instruct-Ja installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/alfredplpl-Llama-3-8B-Instruct-Ja-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("alfredplpl/Llama-3-8B-Instruct-Ja") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model alfredplpl/Llama-3-8B-Instruct-Ja before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
hariharan1/detr-resnet-50-hardhat-finetuned
hariharan1
"2024-06-23T16:24:15Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "endpoints_compatible", "region:us" ]
object-detection
"2024-06-23T16:01:54Z"
Entry not found
ProElectro07/PatDocMod
ProElectro07
"2024-06-23T16:03:21Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:03:01Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/mychen76-mistral7b_ocr_to_json_v1-AWQ-4bit-smashed
PrunaAI
"2024-06-23T16:05:08Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "base_model:mychen76/mistral7b_ocr_to_json_v1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T16:03:19Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: mychen76/mistral7b_ocr_to_json_v1 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished. 
"Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check the requirements of the original repo mychen76/mistral7b_ocr_to_json_v1. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/mychen76-mistral7b_ocr_to_json_v1-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("mychen76/mistral7b_ocr_to_json_v1") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.decode(outputs[0])) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model mychen76/mistral7b_ocr_to_json_v1, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
berrykim/berrykimv2
berrykim
"2024-06-23T16:03:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:03:51Z"
Entry not found
ljnlonoljpiljm/enriched_docci_paligemma_mps_finetune_v1
ljnlonoljpiljm
"2024-06-23T16:06:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:06:50Z"
Entry not found
yukiwuki/distilroberta-base-finetuned-wikitext2
yukiwuki
"2024-06-23T17:39:05Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-06-23T16:07:15Z"
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.082 | 1.0 | 2406 | 1.9339 | | 1.9833 | 2.0 | 4812 | 1.8850 | | 1.9422 | 3.0 | 7218 | 1.8349 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
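The card above omits usage, but a fill-mask checkpoint like this one can be queried with the standard pipeline. A minimal sketch (the example sentence is arbitrary):

```python
from transformers import pipeline

# Minimal usage sketch for the fill-mask checkpoint; RoBERTa uses <mask>.
fill_mask = pipeline("fill-mask", model="yukiwuki/distilroberta-base-finetuned-wikitext2")
for pred in fill_mask("The capital of France is <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```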
PrunaAI/TheDrummer-Moistral-11B-v2-AWQ-4bit-smashed
PrunaAI
"2024-06-23T16:10:48Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:TheDrummer/Moistral-11B-v2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-23T16:07:51Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: TheDrummer/Moistral-11B-v2 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished. 
"Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check the requirements of the original repo TheDrummer/Moistral-11B-v2. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/TheDrummer-Moistral-11B-v2-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("TheDrummer/Moistral-11B-v2") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.decode(outputs[0])) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TheDrummer/Moistral-11B-v2, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
idk7070707/jkghei5u48trj9g
idk7070707
"2024-06-23T16:10:03Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:08:59Z"
Entry not found
vinevixx/midjourney-falcon-7b
vinevixx
"2024-06-23T16:14:49Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T16:09:28Z"
--- license: openrail ---
izalnur/suck_on_that
izalnur
"2024-06-23T16:10:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:10:09Z"
Entry not found
vinevixx/falcon
vinevixx
"2024-06-23T16:36:46Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T16:14:32Z"
--- license: openrail ---
rafatsiddiqui/Meta-Llama-3-8B-SST-FineTune
rafatsiddiqui
"2024-06-25T10:52:40Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:18:26Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** rafatsiddiqui - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
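A minimal inference sketch (not from the card): loading the checkpoint back with Unsloth itself. Whether the repo holds LoRA adapters or merged weights is an assumption here; adjust if loading fails.

```python
from unsloth import FastLanguageModel

# Minimal sketch, assuming the repo can be loaded directly with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rafatsiddiqui/Meta-Llama-3-8B-SST-FineTune",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode
```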
itisarainyday/llama3-ft-test
itisarainyday
"2024-06-23T16:21:01Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:19:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
isaac06/isaac
isaac06
"2024-06-23T16:21:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:21:36Z"
Entry not found
Sriinfy/21BK1A6693
Sriinfy
"2024-06-23T16:23:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:23:47Z"
Entry not found
izalnur/ahegao
izalnur
"2024-06-23T16:26:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:25:35Z"
Entry not found
elladeandra/sports-prediction
elladeandra
"2024-06-23T16:39:45Z"
0
0
null
[ "license:unlicense", "region:us" ]
null
"2024-06-23T16:25:52Z"
--- license: unlicense ---
surya-narayanan/other
surya-narayanan
"2024-06-24T00:16:40Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:26:43Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ppatade/whisper-largev3-hi
ppatade
"2024-06-23T16:27:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:27:33Z"
Entry not found
panxinyang/Qwen-Qwen1.5-7B-1719160237
panxinyang
"2024-06-23T16:30:40Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "region:us" ]
null
"2024-06-23T16:30:37Z"
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
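The template above carries no usage info, but its frontmatter declares a PEFT adapter trained on top of Qwen/Qwen1.5-7B, so the usual loading pattern would be the sketch below (the adapter's contents are otherwise undocumented).

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the declared base model, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "panxinyang/Qwen-Qwen1.5-7B-1719160237")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
```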
sudhanshu746/Deepseek-math-7B-finetuned-mk
sudhanshu746
"2024-06-23T16:33:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:33:24Z"
Entry not found
inetnuc/nuclear_model_standard
inetnuc
"2024-06-23T16:35:06Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:34:52Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** inetnuc - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
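A minimal inference sketch (not from the card); whether the repo holds LoRA adapters or merged weights is an assumption, and the example prompt is arbitrary.

```python
from unsloth import FastLanguageModel

# Minimal sketch, assuming the repo can be loaded directly with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="inetnuc/nuclear_model_standard",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode
inputs = tokenizer("What is nuclear fission?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```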
Sriinfy/21BK1A6693GenModel
Sriinfy
"2024-06-23T16:35:03Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:35:03Z"
Entry not found
dfndr11/llama-2-7b-climate-change-finetune
dfndr11
"2024-06-23T16:38:20Z"
0
0
null
[ "safetensors", "optimum_habana", "region:us" ]
null
"2024-06-23T16:35:59Z"
This model was fine-tuned on the climate-change-qna dataset: https://huggingface.co/datasets/dfndr11/climate-change-qna It was created for the QuizzicalAI submission to the Berkeley AI 2024 Hackathon and fine-tuned on Intel Gaudi 2 machines. https://devpost.com/software/quizzicalai
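A minimal loading sketch (not from the card), assuming the repo holds a standard Llama-2-style checkpoint in safetensors; the card itself documents only the dataset and the Gaudi 2 training setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the fine-tuned checkpoint like any causal LM repo.
repo = "dfndr11/llama-2-7b-climate-change-finetune"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```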
hellwolfsh123/12345
hellwolfsh123
"2024-06-23T16:36:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:36:15Z"
Entry not found
Akseltinfat/opus-tatoeba-en-zgh
Akseltinfat
"2024-06-23T16:37:37Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T16:37:37Z"
--- license: apache-2.0 ---
liminerity/Bitnet-Mistral.0.21
liminerity
"2024-06-23T16:37:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:37:38Z"
Entry not found
manbeast3b/KinoInferTry12
manbeast3b
"2024-06-23T16:39:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:39:48Z"
Entry not found
itisarainyday/llama-3-7b-ft-merged-v1
itisarainyday
"2024-06-23T16:43:38Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-23T16:39:50Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
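The template above gives no usage, but the repo is tagged text-generation and its name suggests merged weights, so the standard pipeline should apply. A minimal sketch (the prompt is arbitrary):

```python
from transformers import pipeline

# Minimal sketch, assuming the repo holds merged full-model weights.
generator = pipeline("text-generation", model="itisarainyday/llama-3-7b-ft-merged-v1", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```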
cxfajar197/finetuned_transformer
cxfajar197
"2024-06-23T16:40:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:40:52Z"
Entry not found
Adaugiza/vocal
Adaugiza
"2024-06-23T16:43:28Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T16:42:50Z"
--- license: openrail ---
dash8060/kimberry
dash8060
"2024-06-23T16:56:03Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:42:59Z"
Entry not found
panxinyang/Qwen-Qwen1.5-0.5B-1719160983
panxinyang
"2024-06-23T16:43:05Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-23T16:43:03Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
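As with the entry above, the frontmatter declares a PEFT adapter on top of Qwen/Qwen1.5-0.5B; a minimal loading sketch under that assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the declared base model, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B", device_map="auto")
model = PeftModel.from_pretrained(base, "panxinyang/Qwen-Qwen1.5-0.5B-1719160983")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
```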
lqbin/videberta-xsmall_batchsize8_epoch10
lqbin
"2024-06-23T16:43:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:43:06Z"
Entry not found
lqbin/videberta-xsmall_batchsize16_epoch10
lqbin
"2024-06-23T16:44:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:44:15Z"
Entry not found
RemishMinz/Enlighten_Instruct
RemishMinz
"2024-06-23T16:45:09Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
"2024-06-23T16:44:52Z"
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
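Here too the frontmatter declares a PEFT adapter, this time on top of mistralai/Mistral-7B-Instruct-v0.2; a minimal loading sketch under that assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the declared base model, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "RemishMinz/Enlighten_Instruct")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```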
erwannd/llava-finetuning-demo
erwannd
"2024-06-23T17:40:44Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:45:08Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lqbin/videberta-xsmall_batchsize32_epoch10
lqbin
"2024-06-23T16:47:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:47:15Z"
Entry not found
Thirawarit/supershiro-b2-coco-q2-Image-Captioning-large
Thirawarit
"2024-06-23T16:57:44Z"
0
0
transformers
[ "transformers", "safetensors", "blip-2", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
visual-question-answering
"2024-06-23T16:47:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RajuThesis/GPT2_RLHF_FCEData2
RajuThesis
"2024-06-23T16:47:37Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T16:47:32Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lqbin/videberta-xsmall_batchsize24_epoch10
lqbin
"2024-06-23T16:47:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:47:54Z"
Entry not found
liukarlie/forecast
liukarlie
"2024-06-23T16:48:40Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T16:48:40Z"
Entry not found
starnet/21-star21-06-23-01
starnet
"2024-06-23T16:56:28Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
null
"2024-06-23T16:49:57Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Tecnologya/julia
Tecnologya
"2024-06-23T16:51:54Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T16:50:52Z"
--- license: openrail ---
Tecnologya/cris
Tecnologya
"2024-06-23T16:52:59Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T16:52:17Z"
--- license: openrail ---
savage1221/your_model_name
savage1221
"2024-06-24T14:29:07Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-23T16:55:41Z"
--- license: mit ---
Mithun162001/food_classifier
Mithun162001
"2024-06-23T17:31:12Z"
0
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-23T16:59:53Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: Mithun162001/food_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Mithun162001/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3749 - Validation Loss: 0.3678 - Train Accuracy: 0.912 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7571 | 1.6496 | 0.814 | 0 | | 1.2022 | 0.8020 | 0.909 | 1 | | 0.7036 | 0.5592 | 0.895 | 2 | | 0.4919 | 0.4119 | 0.911 | 3 | | 0.3749 | 0.3678 | 0.912 | 4 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
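A minimal inference sketch for this checkpoint, assuming the repo ships the TF weights listed in its tags and the base model's image processor; `food_photo.jpg` is a hypothetical local file, and the label set depends on the (unknown) training data:

```python
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "Mithun162001/food_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("food_photo.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits

pred_id = int(logits.numpy().argmax(axis=-1)[0])
print(model.config.id2label[pred_id])  # label names come from the model config
```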
Flamenco43/legal-ner
Flamenco43
"2024-06-23T17:02:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:02:34Z"
Entry not found
SuketuS/mpnet-base-all-nli-triplet_sys
SuketuS
"2024-06-23T17:03:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:03:42Z"
Entry not found
AlexXIA007/model1
AlexXIA007
"2024-06-23T17:08:24Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:08:24Z"
Entry not found
RyotaKadoya1993/phi3_translator
RyotaKadoya1993
"2024-06-23T17:14:21Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/phi-3-medium-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T17:09:54Z"
--- base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- # Uploaded model - **Developed by:** RyotaKadoya1993 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
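A speculative inference sketch using the Unsloth API mentioned above; the 4k sequence length, the translation prompt, and CUDA availability are assumptions, not documented behavior:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="RyotaKadoya1993/phi3_translator",
    max_seq_length=4096,  # assumed from the 4k-context Phi-3 base model
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth to its fast inference path

inputs = tokenizer(["Translate to English: おはようございます。"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs)[0])
```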
ppatade/whisper-small-hi
ppatade
"2024-06-23T17:09:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:09:56Z"
Entry not found
tinyrolls/vilt_finetuned_dlmatsuo
tinyrolls
"2024-06-23T17:12:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:12:44Z"
Entry not found
lqbin/videberta-xsmall_batchsize12_epoch10
lqbin
"2024-06-23T17:15:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:15:30Z"
Entry not found
borismartirosyan/modern_marble_sculptures_rvxl
borismartirosyan
"2024-06-23T17:39:08Z"
0
0
null
[ "license:gpl", "region:us" ]
null
"2024-06-23T17:16:11Z"
--- license: gpl ---
Pragati-y/nemo
Pragati-y
"2024-06-23T17:16:11Z"
0
0
null
[ "license:llama3", "region:us" ]
null
"2024-06-23T17:16:11Z"
--- license: llama3 ---
kpoo12345/ch
kpoo12345
"2024-06-23T17:22:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:22:10Z"
import gradio as gr
import pandas as pd
import re


def load_data(file_path):
    return pd.read_excel(file_path)


def extract_courses(data, start_label, column_index):
    # Find the block of rows between the first and last occurrence of the
    # section label, skip the two header rows, and drop failed ('F') courses.
    temp = data[data.iloc[:, column_index] == start_label].index[0]
    course_index = data[data.iloc[:, column_index] == start_label].index.tolist()
    if len(course_index) > 0:
        course_index = [index for index in course_index if index > temp]
        if course_index:
            course_index = course_index[-1]
        else:
            course_index = len(data)
    else:
        course_index = len(data)
    data_subset = data.iloc[temp + 2: course_index]
    filtered_data = data_subset[data_subset.iloc[:, 4] != 'F']
    return filtered_data.iloc[:, column_index].astype(str).tolist()


def check_courses(student_courses, required_courses):
    # Return the required courses that do not appear in the student's record.
    UnGradu_Major = []
    for major in required_courses:
        if not any(str(major) in str(my_courses) for my_courses in student_courses):
            UnGradu_Major.append(major)
    return UnGradu_Major


def calculate_major_credits(majors_data):
    # Sum the credit numbers of major-elective ('전(선)') and major-required ('전필') entries.
    Majors_data = [item for item in majors_data if isinstance(item, str) and ('전(선)' in item or '전필' in item)]
    Majors_numbers = [int(re.search(r'\d+', item).group()) for item in Majors_data if re.search(r'\d+', item)]
    return sum(Majors_numbers)


def get_value(df, row, col, dtype=int):
    value = df.iloc[row, col]
    if pd.isna(value):
        return 0 if dtype == int else 0.0
    return dtype(value)


def check_graduation_areas(data):
    # Collect the general-education rows and tally credits per area.
    temp = data[data.iloc[:, 1] == '공필/일선/교필/교선/교직'].index[0]
    Culture_choice_index = data[data.iloc[:, 1] == '공필/일선/교필/교선/교직'].index.tolist()
    if len(Culture_choice_index) > 0:
        Culture_choice_index = [index for index in Culture_choice_index if index > temp]
        if Culture_choice_index:
            Culture_choice_index = Culture_choice_index[-1]
        else:
            Culture_choice_index = len(data)
    else:
        Culture_choice_index = len(data)
    data_subset = data.iloc[temp + 2: Culture_choice_index]
    filtered_data = data_subset[data_subset.iloc[:, 4] != 'F']
    My_Culture_choice = filtered_data.iloc[:, 1].tolist()
    if '공필/일선/교필/교선/교직' in My_Culture_choice:
        My_Culture_choice.remove('공필/일선/교필/교선/교직')
    Culture_choice = {}
    for item in My_Culture_choice:
        if isinstance(item, str):
            if '창의와통섭' in item or '소통및윤리적행동' in item or '글로컬시민' in item or '자기개발과지식탐구' in item or '교선' in item:
                name, value = item.rsplit(maxsplit=1)
                Culture_choice[name.strip()] = int(value)
    chang_greater_than_2 = Culture_choice.get('창의와통섭', 0) >= 2
    sootong_greater_than_2 = Culture_choice.get('소통및윤리적행동', 0) >= 2
    glo_greater_than_2 = Culture_choice.get('글로컬시민', 0) >= 2
    jagi_greater_than_2 = Culture_choice.get('자기개발과지식탐구', 0) >= 2
    Gu_greater_than_19 = Culture_choice.get('교선', 0) >= 19
    return {
        '창의와 통섭': chang_greater_than_2,
        '소통 및 윤리적 행동': sootong_greater_than_2,
        '글로컬 시민': glo_greater_than_2,
        '자기개발과 지식 탐구': jagi_greater_than_2,
        '교양선택': Gu_greater_than_19
    }


def process_excel(file):
    data = load_data(file.name)

    # Extract the major courses taken
    My_Majors = extract_courses(data, '전필/전공(이수필/선택이수)', 6)
    Majors = ['철학과 신학', '헬라어Ⅰ', '히브리어Ⅰ', '전공탐색과 진로설계Ⅰ']
    UnGradu_Major = check_courses(My_Majors, Majors)

    # Total major credits
    total_major_credits = calculate_major_credits(My_Majors)

    # Extract the general-education courses taken
    My_Culture = extract_courses(data, '공필/일선/교필/교선/교직', 1)
    Culture = ['구약의세계와인성', '신약의세계와섬김', '창의와비판적사고', '개혁주의신앙윤리', '토론·발표와글쓰기', '글쓰기Ⅰ', 'Global EnglishⅠ', 'Global EnglishⅡ', 'NCS직업기초능력']
    UnGradu_Culture = check_courses(My_Culture, Culture)

    # Either '토론·발표와글쓰기' or '글쓰기Ⅰ' satisfies the writing requirement
    if '토론·발표와글쓰기' not in UnGradu_Culture:
        if '글쓰기Ⅰ' in UnGradu_Culture:
            UnGradu_Culture.remove('글쓰기Ⅰ')
    elif '글쓰기Ⅰ' not in UnGradu_Culture:
        if '토론·발표와글쓰기' in UnGradu_Culture:
            UnGradu_Culture.remove('토론·발표와글쓰기')

    # Extract the common-area courses taken
    My_Gongtong = extract_courses(data, '공필/일선/교필/교선/교직', 1)
    Gongtong = ['실천I', '실천Ⅱ', '실천Ⅲ', '실천Ⅳ', '실천Ⅴ', '실천Ⅵ', '실천Ⅶ', '실천Ⅷ', '기독교인성과 섬김의리더I', '기독교인성과 섬김의리더Ⅱ']
    UnGradu_Gongtong = check_courses(My_Gongtong, Gongtong)

    # Credit checks for the four core areas and general-education electives
    graduation_areas_check = check_graduation_areas(data)

    # Basic requirements: total credits and GPA
    total_credits = get_value(data, 3, 5)
    total_credits_check = total_credits >= 130
    total_credits_diff = 130 - total_credits
    gpa = get_value(data, 3, 9, dtype=float)
    gpa_check = gpa >= 2.0

    # Build the result report
    result = ""
    result += "----- 기본 영역을 만족하였는지 확인합니다. -----\n"
    if total_credits_check:
        result += "축하합니다. 총 학점이 130학점 이상입니다.\n"
    else:
        result += f"아쉽습니다. 총 학점을 130학점 이상 채우지 못하였습니다. {total_credits_diff} 학점 이상을 더 채우셔야 합니다.\n"
    if gpa_check:
        result += "축하합니다. 총 평점이 2.0 이상입니다.\n"
    else:
        result += "아쉽습니다. 총 평점이 2.0 이상이 아닙니다.\n"

    result += "\n----- 제 1전공 영역을 만족하였는지 확인합니다. -----\n"
    result += f"듣지 않은 전공 필수 과목 : {UnGradu_Major}\n"
    if not UnGradu_Major:
        result += "축하합니다. 전공필수 과목을 모두 이수하셨습니다.\n"
    else:
        result += "아쉽습니다. 위 항목의 전공필수 과목을 더 이수하셔야 합니다.\n"
    if total_major_credits >= 36:
        result += "축하합니다. 전공 학점이 36 이상입니다.\n"
    else:
        result += f"아쉽습니다. 전공 학점을 36학점 이상 채우지 못하였습니다. {36 - total_major_credits} 학점 이상을 더 채우셔야 합니다.\n"

    result += "\n----- 교양 영역을 만족하였는지 확인합니다. -----\n"
    result += f"듣지 않은 교양필수 과목 : {UnGradu_Culture}\n"
    if not UnGradu_Culture:
        result += "축하합니다. 교양필수 과목을 모두 이수하셨습니다.\n"
    else:
        result += "아쉽습니다. 위 항목의 교양필수 과목을 더 이수하셔야 합니다.\n"
    for area, satisfied in graduation_areas_check.items():
        if area == '교양선택':
            result += f"{area} 영역(19학점 이상)을 만족하였는가? {satisfied}\n"
        else:
            result += f"{area} 영역(2학점 이상)을 만족하였는가? {satisfied}\n"

    result += "\n----- 공통 영역을 만족하였는지 확인합니다. -----\n"
    result += f"채워야 하는 공통 영역: {UnGradu_Gongtong}\n"
    if not UnGradu_Gongtong:
        result += "축하합니다. 공통 영역 과목을 모두 이수하셨습니다.\n"
    else:
        result += "아쉽습니다. 위 항목의 공통 영역 과목을 더 이수하셔야 합니다.\n"

    return result


# Define the Gradio interface
footer = """
<div style='text-align: center;'>
    Copyrightⓒ2024. 김우석 & 김성훈. All Rights Reserved<br>
    총신대학교 졸업 확인 시스템는 총신대학교 기독교융합콘텐츠 수업의 졸업연구 수강생인 김우석이 지도교수인 김성훈 교수와 함께 만들고 있습니다.<br>
    이 시스템은 연구용으로 오류가 있을 수 있으므로 개인이 직접 다시 한번더 확인해보셔야 합니다.
</div>
"""

# Add descriptions to the interface
iface = gr.Blocks()

with iface:
    gr.Markdown("<div style='text-align: center;'><h1>총신대 신학과 졸업 요건 확인(17~20학번)</h1></div>")
    gr.Markdown("<div style='text-align: center;'>파일을 업로드하여 졸업 요건을 확인하세요.</div>")
    gr.Markdown("<div style='text-align: center;'><a href='https://youtu.be/wL_esNF-Np4?si=RhnDG_K9Ao0e9HdW' target='_blank'>사용하는 법 동영상으로 알아보기</a></div>")
    gr.Markdown("<div style='text-align: center;'><a href='https://docs.google.com/forms/d/e/1FAIpQLSeJjJNz3Q0CGBqIOnwrLo66-Le1UKPIgV4-Y695bPGbAsZCYA/viewform' target='_blank'>참여 후 설문 부탁드립니다.</a></div>")
    with gr.Row():
        file_input = gr.File(label="성적표(엑셀) 파일을 업로드하세요.")
        output = gr.Textbox(label="졸업 여부 확인란")
    submit_btn = gr.Button("Submit")
    submit_btn.click(fn=process_excel, inputs=file_input, outputs=output)
    gr.Markdown(footer)

iface.launch(share=True)
xfu20/dummy
xfu20
"2024-06-23T17:25:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:23:10Z"
Entry not found
manisha12/qna
manisha12
"2024-06-23T17:23:55Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:23:55Z"
Entry not found
ismailpolas/e836fa13-f693-43bc-a6f5-b06e44bec6b9
ismailpolas
"2024-06-23T17:24:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:24:48Z"
Entry not found
maheshmnj/sft_openassistant-guanaco
maheshmnj
"2024-06-23T17:25:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:25:38Z"
Entry not found
dash8060/haechansoft
dash8060
"2024-06-23T17:30:37Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:29:35Z"
Entry not found
blockblockblock/llama3-turbcat-instruct-8b-bpw6-exl2
blockblockblock
"2024-06-23T17:34:56Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T17:32:16Z"
---
license: llama3
---

# Turbcat 8b

![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)

# Release notes

This is a direct upgrade over cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset. The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (the Aesir dataset). Note that the 8b is based on llama3, with limited Chinese support due to the base-model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support, and its format will be chatml.

# Data Generation

In addition to the fortifications specified above, the data generation process is largely the same, except for added Chinese Ph.D. entrance-exam, Traditional Chinese, and Chinese storytelling data.

## Special Highlights

* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.

# Task coverage

## Chinese tasks on par with English data

![image/png](1.png)

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.

## Individual tasks quality-checked by doctors

For each cluster, we QC using BERT embeddings on a UMAP:

![image/png](2.png)

The outliers have been manually checked by doctors.

# Third-party datasets

Thanks to the following people for their tremendous support of dataset generation:

* steelskull for the medical CoT dataset with gpt4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca

# Prompt format for 8b: **llama3**

Example raw prompt:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>

CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>

CatGPT:
```

# Prompt format for 72b: **chatml**

Example raw prompt:

```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```

# Support

Please join https://discord.gg/DwGz54Mz for model support.
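As an illustrative addition, the llama3 prompt above can also be produced with the tokenizer's chat template; this is a minimal sketch, assuming the repo ships the llama3 tokenizer and chat-template files:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("blockblockblock/llama3-turbcat-instruct-8b-bpw6-exl2")
messages = [
    {"role": "system", "content": "CatGPT really likes its new cat ears and ends every message with Nyan_"},
    {"role": "user", "content": "CatA: pats CatGPT cat ears"},
]
# add_generation_prompt appends the assistant header so the model continues as CatGPT
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the raw llama3 prompt shown above
```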
Niharrrrrr/luka_modric1
Niharrrrrr
"2024-06-23T17:45:40Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:33:15Z"
Entry not found
Madihaa/distilbert-base-uncased-Distilbert-Model
Madihaa
"2024-06-25T17:29:41Z"
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T17:35:07Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - f1 model-index: - name: distilbert-base-uncased-Distilbert-Model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-Distilbert-Model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7383 - F1: 0.6823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.848 | 0.5015 | 500 | 0.7910 | 0.6663 | | 0.7872 | 1.0030 | 1000 | 0.7383 | 0.6823 | | 0.6766 | 1.5045 | 1500 | 0.7502 | 0.7054 | | 0.6854 | 2.0060 | 2000 | 0.7424 | 0.7096 | | 0.5239 | 2.5075 | 2500 | 0.9047 | 0.7219 | | 0.525 | 3.0090 | 3000 | 0.8375 | 0.7221 | | 0.3925 | 3.5105 | 3500 | 1.0093 | 0.7216 | | 0.4061 | 4.0120 | 4000 | 1.1403 | 0.7245 | | 0.2928 | 4.5135 | 4500 | 1.3150 | 0.6862 | | 0.3055 | 5.0150 | 5000 | 1.3811 | 0.7101 | | 0.2184 | 5.5165 | 5500 | 1.5753 | 0.6985 | | 0.23 | 6.0181 | 6000 | 1.5571 | 0.7122 | | 0.1705 | 6.5196 | 6500 | 1.6771 | 0.7155 | | 0.1416 | 7.0211 | 7000 | 1.7773 | 0.7089 | | 0.1085 | 7.5226 | 7500 | 1.9134 | 0.7124 | | 0.1437 | 8.0241 | 8000 | 1.8510 | 0.7118 | | 0.0967 | 8.5256 | 8500 | 2.0276 | 0.7074 | | 0.0733 | 9.0271 | 9000 | 2.1793 | 0.7112 | | 0.0671 | 9.5286 | 9500 | 2.1100 | 0.7118 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cpu - Datasets 2.20.0 - Tokenizers 0.19.1
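A minimal sketch of scoring text with this checkpoint via `pipeline()`; the label names depend on the undocumented fine-tuning dataset:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Madihaa/distilbert-base-uncased-Distilbert-Model",
)
print(clf("An example sentence to classify."))  # e.g. [{'label': ..., 'score': ...}]
```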
luishcarvalho/llama_smart_contract_GGUF
luishcarvalho
"2024-06-23T17:35:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:35:57Z"
Entry not found
MarcioGabriel20/gpt2
MarcioGabriel20
"2024-06-23T17:39:40Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:39:40Z"
Entry not found
jdollman/q-FrozenLake-v1-4x4-noSlippery
jdollman
"2024-06-23T17:41:12Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-23T17:41:10Z"
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym  # `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook

model = load_from_hub(repo_id="jdollman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
MrSimple07/MyDigitalClone
MrSimple07
"2024-06-23T17:41:27Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-23T17:41:27Z"
--- license: mit ---
jdollman/Taxi-v3
jdollman
"2024-06-23T17:45:28Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-23T17:45:26Z"
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym  # `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook

model = load_from_hub(repo_id="jdollman/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
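A follow-on sketch of evaluating the loaded Q-table with a greedy rollout; the `qtable` key and the classic 4-tuple `env.step` signature are assumptions based on the course's saved format (newer gymnasium releases return extra values):

```python
import gym
import numpy as np

model = load_from_hub(repo_id="jdollman/Taxi-v3", filename="q-learning.pkl")  # helper as above
env = gym.make(model["env_id"])

state = env.reset()  # classic gym API; gymnasium returns (state, info) instead
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)     # gymnasium returns a 5-tuple
    total_reward += reward
print(total_reward)
```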
blockblockblock/llama3-turbcat-instruct-8b-bpw5.5-exl2
blockblockblock
"2024-06-23T17:49:13Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
"2024-06-23T17:46:44Z"
---
license: llama3
---

# Turbcat 8b

![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)

# Release notes

This is a direct upgrade over cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset. The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (the Aesir dataset). Note that the 8b is based on llama3, with limited Chinese support due to the base-model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support, and its format will be chatml.

# Data Generation

In addition to the fortifications specified above, the data generation process is largely the same, except for added Chinese Ph.D. entrance-exam, Traditional Chinese, and Chinese storytelling data.

## Special Highlights

* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.

# Task coverage

## Chinese tasks on par with English data

![image/png](1.png)

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.

## Individual tasks quality-checked by doctors

For each cluster, we QC using BERT embeddings on a UMAP:

![image/png](2.png)

The outliers have been manually checked by doctors.

# Third-party datasets

Thanks to the following people for their tremendous support of dataset generation:

* steelskull for the medical CoT dataset with gpt4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca

# Prompt format for 8b: **llama3**

Example raw prompt:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>

CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>

CatGPT:
```

# Prompt format for 72b: **chatml**

Example raw prompt:

```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```

# Support

Please join https://discord.gg/DwGz54Mz for model support.
phunganhsang/XMLRoberta_Dataset59KCoDuoi
phunganhsang
"2024-06-23T17:47:20Z"
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T17:46:47Z"
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: XMLRoberta_Dataset59KCoDuoi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XMLRoberta_Dataset59KCoDuoi This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2879 - Accuracy: 0.9572 - F1: 0.9573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | No log | 0.5115 | 200 | 0.1969 | 0.9324 | 0.9325 | | No log | 1.0230 | 400 | 0.1623 | 0.9463 | 0.9467 | | No log | 1.5345 | 600 | 0.1687 | 0.9486 | 0.9487 | | 0.2066 | 2.0460 | 800 | 0.1706 | 0.9541 | 0.9544 | | 0.2066 | 2.5575 | 1000 | 0.1454 | 0.9548 | 0.9550 | | 0.2066 | 3.0691 | 1200 | 0.1511 | 0.9569 | 0.9571 | | 0.2066 | 3.5806 | 1400 | 0.1495 | 0.9564 | 0.9565 | | 0.1117 | 4.0921 | 1600 | 0.1576 | 0.9568 | 0.9568 | | 0.1117 | 4.6036 | 1800 | 0.1455 | 0.9551 | 0.9553 | | 0.1117 | 5.1151 | 2000 | 0.1526 | 0.9615 | 0.9616 | | 0.1117 | 5.6266 | 2200 | 0.1521 | 0.9582 | 0.9583 | | 0.0855 | 6.1381 | 2400 | 0.1516 | 0.9585 | 0.9587 | | 0.0855 | 6.6496 | 2600 | 0.1610 | 0.9577 | 0.9580 | | 0.0855 | 7.1611 | 2800 | 0.1592 | 0.9597 | 0.9599 | | 0.0855 | 7.6726 | 3000 | 0.1707 | 0.9565 | 0.9565 | | 0.0675 | 8.1841 | 3200 | 0.1708 | 0.9560 | 0.9562 | | 0.0675 | 8.6957 | 3400 | 0.1833 | 0.9542 | 0.9546 | | 0.0675 | 9.2072 | 3600 | 0.1713 | 0.9579 | 0.9579 | | 0.0675 | 9.7187 | 3800 | 0.1749 | 0.9586 | 0.9587 | | 0.0519 | 10.2302 | 4000 | 0.1781 | 0.9585 | 0.9587 | | 0.0519 | 10.7417 | 4200 | 0.1996 | 0.9575 | 0.9576 | | 0.0519 | 11.2532 | 4400 | 0.2032 | 0.9557 | 0.9560 | | 0.0519 | 11.7647 | 4600 | 0.1813 | 0.9573 | 0.9576 | | 0.0419 | 12.2762 | 4800 | 0.2248 | 0.9580 | 0.9582 | | 0.0419 | 12.7877 | 5000 | 0.2166 | 0.9574 | 0.9576 | | 0.0419 | 13.2992 | 5200 | 0.2183 | 0.9555 | 0.9557 | | 0.0419 | 13.8107 | 5400 | 0.2312 | 0.9559 | 0.9561 | | 0.0326 | 14.3223 | 5600 | 0.2248 | 0.9585 | 0.9586 | | 0.0326 | 14.8338 | 5800 | 0.2627 | 0.9555 | 0.9557 | | 0.0326 | 15.3453 | 6000 | 0.2449 | 0.9582 | 0.9583 | | 0.0326 | 15.8568 | 6200 | 0.2393 | 0.9595 | 0.9596 | | 0.0259 | 16.3683 | 6400 | 0.2676 | 0.9566 | 0.9568 | | 0.0259 | 16.8798 | 6600 | 0.2590 | 0.9577 | 0.9579 | | 0.0259 | 17.3913 | 6800 | 0.2616 | 0.9587 | 0.9589 | | 0.0259 | 17.9028 | 7000 | 0.2765 | 0.9568 | 0.9568 | | 0.0203 | 18.4143 | 7200 | 0.2862 | 0.9575 | 0.9576 | | 0.0203 | 18.9258 | 7400 | 0.2857 | 0.9581 | 0.9582 | | 0.0203 | 19.4373 | 7600 | 0.2859 | 0.9582 | 0.9583 | | 0.0203 | 19.9488 | 7800 | 0.2879 | 0.9572 | 0.9573 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
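A minimal sketch using the explicit model/tokenizer API rather than `pipeline()`; the sample sentence is arbitrary, and the label mapping depends on the undocumented 59K dataset:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "phunganhsang/XMLRoberta_Dataset59KCoDuoi"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

enc = tokenizer("Một câu ví dụ để phân loại.", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```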
MrSimple07/MrSimple07
MrSimple07
"2024-06-23T17:48:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:48:07Z"
Entry not found
ErikFlom/Hanna_experimentv2
ErikFlom
"2024-06-23T17:51:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:49:46Z"
Entry not found
Virender13/Example-model
Virender13
"2024-06-23T17:59:28Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:50:35Z"
---
license: mit
---

# EXAMPLE MODEL
C0ttontheBunny/ProjectNexusModels
C0ttontheBunny
"2024-06-24T02:55:04Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T17:51:24Z"
--- license: openrail ---
Niharrrrrr/rome_odunze1
Niharrrrrr
"2024-06-23T18:07:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:52:31Z"
Entry not found
ArisA1/train
ArisA1
"2024-06-23T18:39:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T17:57:52Z"
Entry not found
Elpepeasdsadad/sdasdasd
Elpepeasdsadad
"2024-06-23T18:00:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T18:00:06Z"
Entry not found
blockblockblock/llama3-turbcat-instruct-8b-bpw5-exl2
blockblockblock
"2024-06-23T18:03:17Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "exl2", "region:us" ]
text-generation
"2024-06-23T18:01:01Z"
---
license: llama3
---

# Turbcat 8b

![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)

# Release notes

This is a direct upgrade over cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset. The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (the Aesir dataset). Note that the 8b is based on llama3, with limited Chinese support due to the base-model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support, and its format will be chatml.

# Data Generation

In addition to the fortifications specified above, the data generation process is largely the same, except for added Chinese Ph.D. entrance-exam, Traditional Chinese, and Chinese storytelling data.

## Special Highlights

* 20 postdocs (10 Chinese and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.

# Task coverage

## Chinese tasks on par with English data

![image/png](1.png)

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.

## Individual tasks quality-checked by doctors

For each cluster, we QC using BERT embeddings on a UMAP:

![image/png](2.png)

The outliers have been manually checked by doctors.

# Third-party datasets

Thanks to the following people for their tremendous support of dataset generation:

* steelskull for the medical CoT dataset with gpt4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca

# Prompt format for 8b: **llama3**

Example raw prompt:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>

CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>

CatGPT:
```

# Prompt format for 72b: **chatml**

Example raw prompt:

```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```

# Support

Please join https://discord.gg/DwGz54Mz for model support.
lmsantos/llama3-cpqd
lmsantos
"2024-06-23T18:14:08Z"
0
0
null
[ "safetensors", "summarization", "pt", "region:us" ]
summarization
"2024-06-23T18:02:05Z"
---
language:
- pt
pipeline_tag: summarization
---

# Model Card for lmsantos/llama3-cpqd

## General Information

- **Name:** [lmsantos/llama3-cpqd](https://huggingface.co/lmsantos/llama3-cpqd)
- **Type:** Language Model, Transformer Decoder-Only
- **License:** Language model
- **Base model:** [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit)

## Summary

This LLM is the result of two fine-tuning passes for summarization tasks applied to the Llama 3 model, whose architecture is decoder-only. The first fine-tuning used the XL-Sum dataset [csebuetnlp/xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum); the second was based on the RecognaSumm dataset [recogna-nlp/recognasumm](https://huggingface.co/datasets/recogna-nlp/recognasumm).

## Intended Use

The model can be used for summarization of Brazilian Portuguese texts. It has not been tested on other languages.

### Usage

```
from peft import PeftModel
from unsloth import FastLanguageModel
import torch

max_seq_length = 6144
dtype = None
load_in_4bit = True

if True:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lmsantos/llama3-cpqd", # YOUR MODEL YOU USED FOR TRAINING
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)

prompt = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nVocê é uma AI especializada em resumir textos em português.Resuma o texto a seguir:<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n{}<|eot_id|>"

inputs = tokenizer(
[
    prompt.format(
        '''O presidente disse que enquanto mundo faz guerra, pessoas estão passando fome. Lula citou mensagem do papa Francisco. "Estou de acordo. O papa tem mandado seus cardeais que estão discutindo com Zelensky e com Putin", disse Lula ao confirmar que a guerra na Ucrânia foi pauta de seu encontro com o pontífice. Segundo Lula, nunca se sabe como está a cabeça dos dois presidentes, e até o momento, todos acham que vão ganhar, o dado concreto é que vidas estão sendo ceifadas, milhares de pessoas estão morrendo. "Precisamos ter gente envolvida discutindo a paz. É preciso parar de atirar" pediu o chefe do executivo do Brasil. O petista ainda propôs uma rodada de negociações, com interlocutores que os dois lados optarem. Para ele, uma opção poderia ser a ONU (Organização das Nações Unidas).
''', # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer

outputs = model.generate(**inputs, max_new_tokens = 512)
tokenizer.batch_decode(outputs)
print(tokenizer.decode(outputs[0]))
```

## Languages

Brazilian Portuguese (pt-BR)

## Training Data

The training data for this model comes, first, from the XL-Sum dataset [csebuetnlp/xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum) and also from the RecognaSumm dataset [recogna-nlp/recognasumm](https://huggingface.co/datasets/recogna-nlp/recognasumm), both composed of news texts and structured so that each entry holds the original article and its summary, i.e. typical supervised-learning content.
dash8060/luda
dash8060
"2024-06-23T18:04:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T18:04:08Z"
Entry not found