| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 122 |
| author | string | length 2 to 42 |
| last_modified | unknown | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string | 245 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 48 distinct values |
| createdAt | unknown | |
| card | string | length 1 to 901k |
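The records below repeat these fields in the same order (modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card), with the full model-card text as the last field. A minimal sketch of how such an export could be scanned is shown here; the file name `hub_models.jsonl` is a placeholder for wherever the dump is stored, not a path given in the dataset itself.

```python
# Sketch: scan a JSON-lines export of these records and tally pipeline tags.
# "hub_models.jsonl" is a placeholder file name, not part of this dump.
import json
from collections import Counter

records = []
with open("hub_models.jsonl", encoding="utf-8") as fh:
    for line in fh:
        records.append(json.loads(line))

# Tally the pipeline_tag column summarized in the table above.
tasks = Counter(r.get("pipeline_tag") or "null" for r in records)
print(tasks.most_common(10))

# Print a compact view of the first few rows in field order.
for r in records[:5]:
    print(r["modelId"], r["author"], r["downloads"], r["likes"], r["library_name"])
```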
alexgrigore/videomae-base-finetuned-good-gesturePhaseV10
alexgrigore
"2024-06-10T09:48:21Z"
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2024-06-10T09:25:40Z"
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-good-gesturePhaseV10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-good-gesturePhaseV10 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9253 - Loss: 0.3122 - Accuracy Hold: 1.0 - Accuracy Stroke: 0.4286 - Accuracy Recovery: 0.7895 - Accuracy Preparation: 1.0 - Accuracy Unknown: 0.6429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 630 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Accuracy Hold | Accuracy Stroke | Accuracy Recovery | Accuracy Preparation | Accuracy Unknown | |:-------------:|:------:|:----:|:--------:|:---------------:|:-------------:|:---------------:|:-----------------:|:--------------------:|:----------------:| | 1.1344 | 0.2016 | 127 | 0.6900 | 1.0021 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | | 0.5961 | 1.2016 | 254 | 0.7948 | 0.6022 | 0.2692 | 0.0 | 0.0588 | 0.9873 | 0.8182 | | 0.3453 | 2.2016 | 381 | 0.8777 | 0.3925 | 0.8077 | 0.0 | 0.4118 | 0.9747 | 0.8636 | | 0.1551 | 3.2016 | 508 | 0.9432 | 0.2178 | 0.9615 | 0.1667 | 0.7059 | 0.9937 | 0.9545 | | 0.1213 | 4.1937 | 630 | 0.9476 | 0.2032 | 0.9615 | 0.6667 | 0.7059 | 1.0 | 0.8182 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
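The card above documents only training hyperparameters and results, so a hedged inference sketch may help. It mirrors the standard VideoMAE example from the transformers documentation, substituting this checkpoint; the random 16-frame clip is a stand-in for real video frames at the processor's expected resolution.

```python
# Hedged inference sketch for the fine-tuned VideoMAE gesture-phase classifier.
# The random clip is a placeholder for 16 real frames at 224x224.
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

repo = "alexgrigore/videomae-base-finetuned-good-gesturePhaseV10"
processor = AutoImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

video = list(np.random.randint(0, 255, (16, 3, 224, 224)))  # 16 frames, channels-first
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```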
mandarchaudharii/maintenancesolution_old
mandarchaudharii
"2024-06-10T10:48:23Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:27:06Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kevin009/llamamathv3
kevin009
"2024-06-10T09:30:59Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:27:27Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** kevin009 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
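The card names the base model and training setup but leaves usage implicit. A hedged sketch follows: if the repository holds full merged weights it loads like any causal LM; if it only stores a LoRA adapter, the adapter would instead be attached to the unsloth/llama-3-8b-Instruct-bnb-4bit base with peft. The prompt is illustrative.

```python
# Hedged sketch, assuming the repo contains full (merged) causal-LM weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kevin009/llamamathv3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "Solve step by step: what is 12 * (7 + 5)?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```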
Mareksal/example-model
Mareksal
"2024-06-10T09:38:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:28:53Z"
# Example model This is my model card README --- license: mit ---
LeoKuo49/whisper-large-aiaia
LeoKuo49
"2024-06-10T09:31:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:31:34Z"
Entry not found
llmvetter/CartpoleV1
llmvetter
"2024-06-10T09:32:31Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T09:32:17Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartpoleV1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 405.53 +/- 121.70 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
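The card points to Unit 4 of the Deep Reinforcement Learning Course for training details but includes no code. The sketch below is a generic REINFORCE training loop on CartPole-v1, not the uploaded agent's exact implementation; the network size, learning rate, and episode count are illustrative, and gymnasium plus PyTorch are assumed.

```python
# Minimal REINFORCE sketch for CartPole-v1 (not the uploaded agent's exact code).
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        done = terminated or truncated

    # Discounted returns, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # Policy-gradient loss: maximize return-weighted log-probabilities.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```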
ismailpolas/d36d5237-bff6-4896-be71-5196f2a60aaf
ismailpolas
"2024-06-10T09:33:02Z"
0
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T09:32:32Z"
Entry not found
RabidUmarell/dns-roberta-bpe
RabidUmarell
"2024-06-10T09:32:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:32:34Z"
Entry not found
ffkk200/TEST2
ffkk200
"2024-06-10T09:37:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:36:12Z"
Entry not found
jnalwa/taskAuto
jnalwa
"2024-06-10T09:42:19Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T09:36:53Z"
--- license: apache-2.0 ---
fatyanosa/Komodo-7B-squadpairs-indo
fatyanosa
"2024-06-10T09:47:45Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-10T09:42:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
amit0103/code-llama-7b-text-to-sql
amit0103
"2024-06-10T09:43:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:43:21Z"
Entry not found
Benphil/CoT-multiDomain-Summ
Benphil
"2024-06-10T16:08:30Z"
0
0
transformers
[ "transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-10T09:43:33Z"
--- base_model: google/pegasus-cnn_dailymail tags: - generated_from_trainer model-index: - name: CoT-multiDomain-Summ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CoT-multiDomain-Summ This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 438 | 1.2374 | | 4.18 | 2.0 | 876 | 1.1642 | | 1.1654 | 3.0 | 1314 | 1.1482 | | 1.0725 | 4.0 | 1752 | 1.1456 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
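The card reports validation loss only; a hedged usage sketch with the transformers summarization pipeline is below. The input text is a placeholder.

```python
# Hedged usage sketch for the fine-tuned Pegasus summarizer.
from transformers import pipeline

summarizer = pipeline("summarization", model="Benphil/CoT-multiDomain-Summ")
article = "Replace this placeholder with the document to be summarized."
print(summarizer(article, max_length=128, min_length=32, do_sample=False)[0]["summary_text"])
```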
1024m/WASSA2024-3A-LLAMA3-8B-Ints-Demo-lora
1024m
"2024-06-10T09:44:56Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:44:46Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ar9av/idefics2-8b-finetuned-xaxis
ar9av
"2024-06-10T09:47:59Z"
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
"2024-06-10T09:47:53Z"
--- license: apache-2.0 base_model: HuggingFaceM4/idefics2-8b tags: - generated_from_trainer model-index: - name: idefics2-8b-finetuned-xaxis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-finetuned-xaxis This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.19.2 - Tokenizers 0.19.1
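The card gives hyperparameters but no usage example. The sketch below follows the documented Idefics2 chat-template pattern and assumes the repository contains full fine-tuned weights; if it stores only an adapter, the base HuggingFaceM4/idefics2-8b would be loaded first and the adapter attached with peft. The image path and question are placeholders.

```python
# Hedged usage sketch following the standard Idefics2 inference pattern.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

repo = "ar9av/idefics2-8b-finetuned-xaxis"
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForVision2Seq.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

image = Image.open("chart.png")  # placeholder image of a chart
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is shown on the x-axis?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```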
badrabdullah/xls-r-300-cv17-polish-adap-cs
badrabdullah
"2024-06-10T20:05:52Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-10T09:48:20Z"
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: xls-r-300-cv17-polish-adap-cs results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: pl split: validation args: pl metrics: - name: Wer type: wer value: 0.3181674482322567 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/badr-nlp/xlsr-continual-finetuning-polish/runs/gugvjjo9) # xls-r-300-cv17-polish-adap-cs This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4585 - Wer: 0.3182 - Cer: 0.0713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 3.5986 | 1.6 | 100 | 3.9654 | 0.9986 | 0.9660 | | 3.2886 | 3.2 | 200 | 3.4889 | 1.0 | 1.0 | | 3.1683 | 4.8 | 300 | 3.1937 | 0.9946 | 0.9735 | | 2.7362 | 6.4 | 400 | 2.6853 | 1.0 | 0.8424 | | 0.6269 | 8.0 | 500 | 0.5183 | 0.5745 | 0.1381 | | 0.2661 | 9.6 | 600 | 0.4218 | 0.4551 | 0.1048 | | 0.1646 | 11.2 | 700 | 0.4160 | 0.4211 | 0.0985 | | 0.1197 | 12.8 | 800 | 0.4793 | 0.4578 | 0.1072 | | 0.1925 | 14.4 | 900 | 0.4402 | 0.4283 | 0.0969 | | 0.1132 | 16.0 | 1000 | 0.4253 | 0.3909 | 0.0906 | | 0.0851 | 17.6 | 1100 | 0.4609 | 0.3951 | 0.0921 | | 0.0799 | 19.2 | 1200 | 0.4453 | 0.3944 | 0.0907 | | 0.0657 | 20.8 | 1300 | 0.4681 | 0.3846 | 0.0887 | | 0.1188 | 22.4 | 1400 | 0.4575 | 0.3785 | 0.0873 | | 0.1088 | 24.0 | 1500 | 0.4649 | 0.3824 | 0.0882 | | 0.0698 | 25.6 | 1600 | 0.4496 | 0.3611 | 0.0817 | | 0.0575 | 27.2 | 1700 | 0.4459 | 0.3585 | 0.0822 | | 0.0705 | 28.8 | 1800 | 0.4542 | 0.3608 | 0.0820 | | 0.0524 | 30.4 | 1900 | 0.4785 | 0.3549 | 0.0814 | | 0.0338 | 32.0 | 2000 | 0.4566 | 0.3521 | 0.0801 | | 0.0357 | 33.6 | 2100 | 0.4597 | 0.3472 | 0.0783 | | 0.0477 | 35.2 | 2200 | 0.4626 | 0.3451 | 0.0788 | | 0.0478 | 36.8 | 2300 | 0.4730 | 0.3375 | 0.0765 | | 0.0568 | 38.4 | 2400 | 0.4713 | 0.3333 | 0.0749 | | 0.0217 | 40.0 | 2500 | 0.4701 | 0.3324 | 0.0755 | | 0.0404 | 41.6 | 2600 | 0.4585 | 0.3278 | 0.0740 | | 0.0118 | 43.2 | 2700 | 0.4656 | 0.3259 | 0.0736 | | 0.0374 | 44.8 | 2800 | 0.4625 | 0.3249 | 0.0731 | | 0.0417 | 46.4 | 2900 | 0.4599 | 0.3206 | 0.0721 | | 0.0378 | 48.0 | 3000 | 0.4614 | 0.3195 | 0.0717 | | 0.0381 | 49.6 | 3100 | 0.4585 | 0.3182 | 0.0713 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - 
Tokenizers 0.19.1
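The card above documents the training run in detail; a hedged transcription sketch with the transformers ASR pipeline is added here. The audio path is a placeholder and should point to a 16 kHz recording.

```python
# Hedged transcription sketch for the fine-tuned XLS-R checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="badrabdullah/xls-r-300-cv17-polish-adap-cs")
print(asr("polish_sample.wav")["text"])  # placeholder path to a 16 kHz WAV file
```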
badrabdullah/xls-r-300-cv17-czech-adap-pl
badrabdullah
"2024-06-10T15:57:13Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-10T09:50:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sjyuxyz/web3mmlu
sjyuxyz
"2024-06-10T09:51:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:51:48Z"
Entry not found
zzangten/KickPigeon3
zzangten
"2024-06-10T09:53:31Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-06-10T09:53:31Z"
--- license: other license_name: yolo-v license_link: LICENSE ---
Nelly43/fossil_sub
Nelly43
"2024-06-10T09:53:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:53:42Z"
Entry not found
TFEE/KickPigeon3
TFEE
"2024-06-10T09:56:34Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-06-10T09:56:34Z"
--- license: other license_name: yolo-v5 license_link: LICENSE ---
Anzovi/distilBERT-news-ru
Anzovi
"2024-06-12T12:31:32Z"
0
0
transformers
[ "transformers", "safetensors", "DistilBERTClassRus", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:58:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kiiwee/Detectron2_FasterRCNN_InsectDetect
kiiwee
"2024-06-10T11:26:03Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T09:59:01Z"
--- license: apache-2.0 ---
ChatK/ner-tokenizer
ChatK
"2024-06-10T10:53:42Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:59:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aleoaaaa/camembert2camembert_shared-finetuned-french-summarization_finetuned_10_06_11_59
aleoaaaa
"2024-06-10T09:59:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:59:21Z"
Entry not found
alexgrigore/videomae-base-finetuned-good-gesturePhaseV11
alexgrigore
"2024-06-10T10:44:22Z"
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2024-06-10T09:59:43Z"
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-good-gesturePhaseV11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-good-gesturePhaseV11 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9461 - Loss: 0.1967 - Accuracy Hold: 1.0 - Accuracy Stroke: 0.4286 - Accuracy Recovery: 0.8947 - Accuracy Preparation: 0.9686 - Accuracy Unknown: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1260 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Accuracy Hold | Accuracy Stroke | Accuracy Recovery | Accuracy Preparation | Accuracy Unknown | |:-------------:|:------:|:----:|:--------:|:---------------:|:-------------:|:---------------:|:-----------------:|:--------------------:|:----------------:| | 1.2097 | 0.1008 | 127 | 0.6900 | 1.0243 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | | 0.8 | 1.1008 | 254 | 0.6900 | 0.8717 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | | 0.5199 | 2.1008 | 381 | 0.7729 | 0.6725 | 0.0 | 0.0 | 0.1765 | 0.9810 | 0.8636 | | 0.3134 | 3.1008 | 508 | 0.8428 | 0.4715 | 0.3077 | 0.0 | 0.4118 | 1.0 | 0.9091 | | 0.1561 | 4.1008 | 635 | 0.8952 | 0.4363 | 0.7692 | 0.0 | 0.7059 | 1.0 | 0.6818 | | 0.0429 | 5.1008 | 762 | 0.9432 | 0.2211 | 0.8846 | 0.5 | 0.7059 | 0.9937 | 0.9545 | | 0.2294 | 6.1008 | 889 | 0.9476 | 0.2094 | 0.8846 | 0.1667 | 0.8824 | 1.0 | 0.9091 | | 0.1214 | 7.1008 | 1016 | 0.9607 | 0.1586 | 0.8846 | 0.6667 | 0.8824 | 0.9937 | 0.9545 | | 0.1478 | 8.1008 | 1143 | 0.9432 | 0.1607 | 0.8846 | 0.6667 | 0.8824 | 0.9684 | 0.9545 | | 0.1156 | 9.0929 | 1260 | 0.9738 | 0.1177 | 1.0 | 0.6667 | 0.8824 | 0.9937 | 0.9545 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
bckang/zephyr-7b-sft-full
bckang
"2024-06-10T10:00:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:00:41Z"
Entry not found
MicardiumGreece/MicardiumGreece
MicardiumGreece
"2024-06-10T10:05:06Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T10:02:07Z"
--- license: apache-2.0 --- What is Micardium? Micardium Pills are a pioneering hypertension capsule specifically designed to help manage high blood pressure effectively. High blood pressure, or hypertension, is a common condition that can lead to serious health problems such as heart disease, stroke and kidney failure if left untreated. The Micardium capsule combines a potent blend of natural ingredients designed to support cardiovascular health, reduce blood pressure levels and improve overall well-being. This comprehensive solution aims to provide a natural and effective approach to managing hypertension. Official website:<a href="https://www.nutritionsee.com/miczgreks">www.Micardium.com</a> <p><a href="https://www.nutritionsee.com/miczgreks"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/06/Micardium-Greece-1.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/miczgreks">Buy now!! Click the link below for more information and get a 50% discount now... Hurry </a> Official website:<a href="https://www.nutritionsee.com/miczgreks">www.Micardium.com</a>
kiiwee/Yolov8_InsectDetect
kiiwee
"2024-06-10T10:03:51Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T10:03:04Z"
--- license: apache-2.0 ---
woweenie/v68-run1-4-merge
woweenie
"2024-06-10T13:10:29Z"
0
0
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-06-10T10:03:42Z"
Entry not found
CaiusDai/my_awesome_model
CaiusDai
"2024-06-10T10:03:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:03:50Z"
Entry not found
elenaovv/IceSpike-igc-2-3
elenaovv
"2024-06-10T10:09:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:06:32Z"
Entry not found
elenaovv/IceSpike-imdb
elenaovv
"2024-06-10T10:09:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:07:32Z"
Entry not found
Boostaro155/Slim9696
Boostaro155
"2024-06-10T10:08:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:08:33Z"
# Slim Gummies reviews Höhle der Löwen - Slim Gummies Germany official price, buy Slim Gummies reviews Höhle der Löwen These natural and clinically tested gummies are meant to help people lose weight healthily and become slim. For those who want to take dietary supplements, softgel capsules with the formula's natural ingredients are available. It is an oral fat-burning capsule that also prevents your body from storing fat. ## **[Click here to buy Slim Gummies now on the official website](https://slim-gummies-deutschland.de/)** ## How Slimming Gummies work "Slimming Gummies" is a well-documented fat-burning formula that induces ketosis with the help of natural ingredients. It delivers strong fat-burning results with a supernatural blend. The best weight-loss product used by professionals has the potential to keep you away from many illnesses. It is not just a fitness programme but a wellness option that gives you the power of beta-hydroxybutyrate ketones for slimmer results. The formula, enriched with strawberries and apples, contains natural stevia for its sweetness. No added sugar, only herbal extracts for fast fat burning and powerful results. With this standalone formula you can improve your body shape while building more muscle mass. The healing effect is very beneficial for liver health. It supports a healthy metabolism so that you can actually burn fat and avoid overeating. Do not let your body accumulate calories; instead, take advantage of this special option. The ingredients of Slimming Gummies are absolutely effective and well labelled to deliver results. It contains natural concentrates and extracts, which means the user can take it without risk or concern. The monthly supply of the gummies consists of a pack of 30 capsules. You should take them regularly, once in the morning and once in the evening, to stay hydrated. Combine routine exercise for best results and a healthy combination. ## What exactly are the benefits of choosing Slimming Gummies? The benefits of choosing Slimming Gummies are numerous. The therapy delivers reliable, safe and very noticeable results. The fat-burning formula puts the body into an active state. It can help you reach your weight-loss goals with more energy and peace of mind. Here are some benefits of choosing the best weight-reduction formula ## Convenient to consume Consuming Slimming Gummies is very simple, as no complicated rules need to be followed. Simply take one gummy at a time to get the right nutrients. Take it twice a day. This promotes effortless training with weights and faster fat burning. ## Safe and risk-free Slimming Gummies is absolutely risk-free, as there is a one-hundred-percent money-back guarantee. Any user who is dissatisfied with the formula can reclaim the money on the manufacturer's website ## Better mental clarity When you get rid of excess toxic fat and unwanted elements, better mental function follows naturally.
Experience optimal energy levels and better concentration with the high-quality weight-loss formula. It is truly nourishing for the whole body from top to bottom. ## Improved health Slimming Gummies provide better health with triglyceride levels that maintain proper cardiovascular function. The high-quality gummies support the transition process and ensure that users feel good while losing weight. Precautions and restrictions for Slimming Gummies Slimming Gummies are extremely effective for weight reduction. You must note the following: Perfectly suitable for everyone, especially those who suffer from serious conditions and are unable to lose weight Not recommended for pregnant and breastfeeding women for any reason It is very important that you keep your routine under control while taking it. Do not leave gaps and do not consume alternative options ## **[Click here to buy Slim Gummies now on the official website](https://slim-gummies-deutschland.de/)**
Oslaw/my_mnist_model
Oslaw
"2024-06-10T10:13:24Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-10T10:10:04Z"
Entry not found
elenaovv/IceSpike-imdb-max_seq_lenth_256
elenaovv
"2024-06-10T10:17:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:10:14Z"
Entry not found
tranthaihoa/mistral_context
tranthaihoa
"2024-06-10T10:10:42Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T10:10:23Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** tranthaihoa - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
imdatta0/qwen2_Magiccoder_evol_10k_ortho
imdatta0
"2024-06-10T14:40:18Z"
0
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:Qwen/Qwen2-7B", "license:apache-2.0", "region:us" ]
null
"2024-06-10T10:12:01Z"
--- license: apache-2.0 library_name: peft tags: - unsloth - generated_from_trainer base_model: Qwen/Qwen2-7B model-index: - name: qwen2_Magiccoder_evol_10k_ortho results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen2_Magiccoder_evol_10k_ortho This model is a fine-tuned version of [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.02 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8045 | 0.0261 | 4 | 0.8796 | | 0.8394 | 0.0522 | 8 | 0.8315 | | 0.8027 | 0.0784 | 12 | 0.8188 | | 0.7742 | 0.1045 | 16 | 0.8136 | | 0.8206 | 0.1306 | 20 | 0.8118 | | 0.7117 | 0.1567 | 24 | 0.8110 | | 0.7248 | 0.1828 | 28 | 0.8097 | | 0.893 | 0.2089 | 32 | 0.8113 | | 0.7788 | 0.2351 | 36 | 0.8096 | | 0.8043 | 0.2612 | 40 | 0.8098 | | 0.8427 | 0.2873 | 44 | 0.8108 | | 0.8171 | 0.3134 | 48 | 0.8098 | | 0.7509 | 0.3395 | 52 | 0.8103 | | 0.7373 | 0.3656 | 56 | 0.8105 | | 0.7708 | 0.3918 | 60 | 0.8107 | | 0.7942 | 0.4179 | 64 | 0.8109 | | 0.8188 | 0.4440 | 68 | 0.8103 | | 0.768 | 0.4701 | 72 | 0.8100 | | 0.786 | 0.4962 | 76 | 0.8095 | | 0.7728 | 0.5223 | 80 | 0.8094 | | 0.8575 | 0.5485 | 84 | 0.8091 | | 0.7635 | 0.5746 | 88 | 0.8088 | | 0.8469 | 0.6007 | 92 | 0.8082 | | 0.7647 | 0.6268 | 96 | 0.8078 | | 0.8741 | 0.6529 | 100 | 0.8073 | | 0.7574 | 0.6790 | 104 | 0.8067 | | 0.8048 | 0.7052 | 108 | 0.8061 | | 0.7615 | 0.7313 | 112 | 0.8056 | | 0.7452 | 0.7574 | 116 | 0.8051 | | 0.7191 | 0.7835 | 120 | 0.8049 | | 0.7999 | 0.8096 | 124 | 0.8046 | | 0.7317 | 0.8357 | 128 | 0.8045 | | 0.8619 | 0.8619 | 132 | 0.8044 | | 0.8071 | 0.8880 | 136 | 0.8040 | | 0.8034 | 0.9141 | 140 | 0.8040 | | 0.7892 | 0.9402 | 144 | 0.8040 | | 0.8291 | 0.9663 | 148 | 0.8040 | | 0.7938 | 0.9925 | 152 | 0.8039 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
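Since library_name is peft, this repository holds a LoRA adapter rather than full weights. A minimal sketch of attaching it to the Qwen/Qwen2-7B base named in the card is below; the dtype and generation settings are illustrative.

```python
# Hedged sketch: attach the LoRA adapter above to its Qwen2-7B base with peft.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "Qwen/Qwen2-7B"
adapter = "imdatta0/qwen2_Magiccoder_evol_10k_ortho"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```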
imdatta0/llama_2_13b_Magiccoder_evol_10k_ortho
imdatta0
"2024-06-10T13:52:03Z"
0
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
"2024-06-10T10:14:00Z"
--- license: llama2 library_name: peft tags: - unsloth - generated_from_trainer base_model: meta-llama/Llama-2-13b-hf model-index: - name: llama_2_13b_Magiccoder_evol_10k_ortho results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_2_13b_Magiccoder_evol_10k_ortho This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.02 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1928 | 0.0262 | 4 | 1.1781 | | 1.1605 | 0.0523 | 8 | 1.1408 | | 1.088 | 0.0785 | 12 | 1.1262 | | 1.0449 | 0.1047 | 16 | 1.1204 | | 1.0849 | 0.1308 | 20 | 1.1163 | | 1.0541 | 0.1570 | 24 | 1.1123 | | 1.0556 | 0.1832 | 28 | 1.1095 | | 1.1005 | 0.2093 | 32 | 1.1065 | | 1.0338 | 0.2355 | 36 | 1.1052 | | 1.1096 | 0.2617 | 40 | 1.1049 | | 1.092 | 0.2878 | 44 | 1.1031 | | 1.1322 | 0.3140 | 48 | 1.1033 | | 1.1075 | 0.3401 | 52 | 1.0993 | | 1.0792 | 0.3663 | 56 | 1.1015 | | 1.1255 | 0.3925 | 60 | 1.1013 | | 1.1261 | 0.4186 | 64 | 1.0988 | | 1.0801 | 0.4448 | 68 | 1.0975 | | 1.0908 | 0.4710 | 72 | 1.0936 | | 1.0374 | 0.4971 | 76 | 1.0943 | | 1.1264 | 0.5233 | 80 | 1.0945 | | 1.1409 | 0.5495 | 84 | 1.0943 | | 1.0822 | 0.5756 | 88 | 1.0942 | | 1.0301 | 0.6018 | 92 | 1.0908 | | 1.03 | 0.6280 | 96 | 1.0900 | | 1.0939 | 0.6541 | 100 | 1.0902 | | 1.083 | 0.6803 | 104 | 1.0902 | | 1.0721 | 0.7065 | 108 | 1.0905 | | 1.105 | 0.7326 | 112 | 1.0905 | | 1.0472 | 0.7588 | 116 | 1.0899 | | 1.0728 | 0.7850 | 120 | 1.0889 | | 1.0905 | 0.8111 | 124 | 1.0885 | | 1.0513 | 0.8373 | 128 | 1.0887 | | 0.9946 | 0.8635 | 132 | 1.0890 | | 1.1033 | 0.8896 | 136 | 1.0889 | | 1.0145 | 0.9158 | 140 | 1.0890 | | 1.0689 | 0.9419 | 144 | 1.0890 | | 1.0904 | 0.9681 | 148 | 1.0889 | | 1.0749 | 0.9943 | 152 | 1.0888 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
RohitDoctor/phi2_finetuned
RohitDoctor
"2024-06-10T10:14:05Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-10T10:14:05Z"
--- license: unknown ---
Oslaw/bone_classifier
Oslaw
"2024-06-10T10:18:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:18:26Z"
Entry not found
JetBrains-Research/swe-sft-tmp
JetBrains-Research
"2024-06-10T21:56:21Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T10:19:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MsgmSgmsG/llama-3-8b
MsgmSgmsG
"2024-06-10T10:38:00Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T10:19:17Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
badrabdullah/xls-r-300-cv17-polish-adap-ru
badrabdullah
"2024-06-10T21:04:40Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-10T10:20:52Z"
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: xls-r-300-cv17-polish-adap-ru results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: pl split: validation args: pl metrics: - name: Wer type: wer value: 0.29855366457663735 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/badr-nlp/xlsr-continual-finetuning-polish/runs/x0030ten) # xls-r-300-cv17-polish-adap-ru This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4087 - Wer: 0.2986 - Cer: 0.0652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 3.2673 | 1.6 | 100 | 3.3121 | 1.0 | 1.0 | | 1.2344 | 3.2 | 200 | 1.1417 | 0.8846 | 0.2502 | | 0.4279 | 4.8 | 300 | 0.4485 | 0.4848 | 0.1082 | | 0.2415 | 6.4 | 400 | 0.3752 | 0.3971 | 0.0871 | | 0.2634 | 8.0 | 500 | 0.4058 | 0.4148 | 0.0927 | | 0.1683 | 9.6 | 600 | 0.4079 | 0.3906 | 0.0887 | | 0.1356 | 11.2 | 700 | 0.4017 | 0.3927 | 0.0872 | | 0.0887 | 12.8 | 800 | 0.4094 | 0.3867 | 0.0874 | | 0.1529 | 14.4 | 900 | 0.4055 | 0.3728 | 0.0843 | | 0.1206 | 16.0 | 1000 | 0.4030 | 0.3709 | 0.0824 | | 0.0573 | 17.6 | 1100 | 0.4370 | 0.3787 | 0.0841 | | 0.073 | 19.2 | 1200 | 0.4157 | 0.3653 | 0.0819 | | 0.0498 | 20.8 | 1300 | 0.4235 | 0.3637 | 0.0811 | | 0.0987 | 22.4 | 1400 | 0.4153 | 0.3526 | 0.0786 | | 0.0791 | 24.0 | 1500 | 0.4239 | 0.3557 | 0.0802 | | 0.0698 | 25.6 | 1600 | 0.4253 | 0.3473 | 0.0779 | | 0.0745 | 27.2 | 1700 | 0.4092 | 0.3518 | 0.0784 | | 0.0689 | 28.8 | 1800 | 0.4326 | 0.3433 | 0.0764 | | 0.059 | 30.4 | 1900 | 0.4207 | 0.3342 | 0.0738 | | 0.0255 | 32.0 | 2000 | 0.4053 | 0.3272 | 0.0726 | | 0.0403 | 33.6 | 2100 | 0.4267 | 0.3264 | 0.0715 | | 0.0281 | 35.2 | 2200 | 0.4141 | 0.3250 | 0.0719 | | 0.0533 | 36.8 | 2300 | 0.4242 | 0.3252 | 0.0718 | | 0.0503 | 38.4 | 2400 | 0.4062 | 0.3147 | 0.0690 | | 0.0292 | 40.0 | 2500 | 0.4109 | 0.3081 | 0.0676 | | 0.0276 | 41.6 | 2600 | 0.3919 | 0.3044 | 0.0665 | | 0.0177 | 43.2 | 2700 | 0.4104 | 0.3038 | 0.0664 | | 0.0268 | 44.8 | 2800 | 0.4149 | 0.3040 | 0.0662 | | 0.0388 | 46.4 | 2900 | 0.4090 | 0.3003 | 0.0656 | | 0.0193 | 48.0 | 3000 | 0.4092 | 0.2994 | 0.0652 | | 0.0428 | 49.6 | 3100 | 0.4087 | 0.2986 | 0.0652 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - 
Tokenizers 0.19.1
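The card above documents training only. A minimal inference sketch for the fine-tuned checkpoint, assuming the processor/tokenizer files were pushed to the same repo and the input audio is 16 kHz mono:

```python
# Sketch: transcribe a 16 kHz Polish audio clip with the fine-tuned checkpoint above.
# "sample_polish.wav" is a placeholder path; any 16 kHz mono waveform will do.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "badrabdullah/xls-r-300-cv17-polish-adap-ru"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample_polish.wav", sr=16_000)  # placeholder audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)       # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```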
Arunima693/llama-2-7b-mlabonne-enhanced
Arunima693
"2024-06-10T10:33:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:22:04Z"
Entry not found
palashdandge/My-Voice
palashdandge
"2024-06-10T10:40:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:24:00Z"
Entry not found
AshiqaSameem/mistral_biology_summarizer_model
AshiqaSameem
"2024-06-10T10:30:56Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-10T10:25:30Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/mistral-7b-v0.3-bnb-4bit --- # Uploaded model - **Developed by:** AshiqaSameem - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
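Since this is a 4-bit Unsloth fine-tune, loading it back through Unsloth is the most direct route. A sketch under the assumption that the SFT prompt template is not documented in the card, so the instruction below is illustrative only:

```python
# Sketch: load the 4-bit Unsloth fine-tune for inference.
# The prompt wording is an assumption; the card does not document the template used during SFT.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AshiqaSameem/mistral_biology_summarizer_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster generation mode

prompt = "Summarize the following biology passage:\n\n<passage text here>"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```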
prasenjeet99/Samar_1_aib
prasenjeet99
"2024-06-10T10:25:52Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T10:25:52Z"
--- license: apache-2.0 ---
davidkim205/hades-7b-sft-lora
davidkim205
"2024-06-11T10:53:23Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation", "conversational", "ko", "en", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-10T10:26:57Z"
--- library_name: transformers language: - ko - en pipeline_tag: text-generation --- # Hades-7b ## Model Details Hades-7b is a sophisticated text generation AI model developed by 2digit specifically for the purpose of news analysis. Leveraging advanced natural language processing techniques, Hades-7b is capable of extracting a wide range of information from news articles. Key functionalities of this model include: 1. **Category Identification**: Hades-7b can classify news articles into various predefined categories. This helps in organizing news content and makes it easier for users to find articles related to specific topics of interest. 2. **Company Name Extraction**: The model can identify and extract the names of companies mentioned in news articles. This feature is particularly useful for financial analysis, where tracking mentions of companies in the media can provide insights into market sentiment and potential stock movements. 3. **Stock-Related Themes**: Hades-7b is adept at recognizing themes and topics related to the stock market. This includes identifying news about market trends, investment strategies, regulatory changes, and other stock-related content. By categorizing news articles based on these themes, the model helps analysts and investors stay informed about relevant market developments. 4. **Keyword Extraction**: The model can pinpoint key keywords and phrases within a news article. These keywords summarize the main points of the article, making it easier for users to quickly grasp the content without reading the entire text. This feature enhances the efficiency of news consumption, especially in fast-paced environments where time is of the essence. Overall, Hades-7b is a powerful tool for anyone involved in news analysis, from financial analysts and journalists to market researchers and investors. By automating the extraction of critical information from news articles, Hades-7b streamlines the process of news analysis and helps users make more informed decisions based on up-to-date information. ## License Use of this model requires company approval. Please contact AI@2digit.io. For more details, please refer to the website below: https://2digit.io/#contactus ## Dataset The model was trained on an internal dataset from 2digit, consisting of 157k dataset. | task | size | ratio | description | | --------- | ------: | ----: | ----------------------------------------------- | | theme | 5,766 | 3.7% | Human-labeled theme stock dataset | | keyword | 23,234 | 14.8% | Human-labeled main and related keyword datasets | | category | 24,605 | 15.6% | Human labeling of 10 categories | | stockname | 103,643 | 65.9% | Human-labeled stockname datasets | ## Evaluation We measured model accuracy through an internal evaluation system. | task | accuracy | description | | --------- | -------: | ------------------------------------ | | theme | 0.66 | Extract themes and related companies | | keyword | 0.40 | Extract keywords and keyword type | | category | 0.88 | News category classification | | stockname | 0.90 | Extract companies |
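The card does not document a prompt format, so any usage example is necessarily an assumption; the sketch below simply illustrates querying the checkpoint for stock-name extraction with a generic instruction. Note that, per the card, use of this model requires approval from 2digit.

```python
# Sketch: query hades-7b-sft-lora for stock-name extraction from a news headline.
# The prompt wording is an assumption; the card does not specify the expected input format.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "davidkim205/hades-7b-sft-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

article = "삼성전자, 2분기 반도체 실적 개선 전망"  # example Korean headline
prompt = f"다음 뉴스에서 언급된 종목명을 추출하세요:\n{article}\n답변:"  # illustrative prompt
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```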
erwanv/test-de-model
erwanv
"2024-06-10T10:27:20Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T10:27:20Z"
--- license: apache-2.0 ---
zJuu/Qwen-Qwen2-0.5B-1718015356
zJuu
"2024-06-10T10:29:49Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T10:29:17Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zJuu/Qwen-Qwen2-1.5B-1718015460
zJuu
"2024-06-10T10:32:28Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T10:31:01Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tenzin3/phi3-mini-128k
tenzin3
"2024-06-10T11:43:10Z"
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-10T10:31:28Z"
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Model Summary The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)| ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. 
Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-128K-Instruct has been integrated in the development version (4.41.0.dev0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. ### Tokenizer Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion<|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. 
With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-128k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct") messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). 
Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. 
These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. | | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4| 37.8 | 62.2 | | MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx). 
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
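The Hardware section above names the eager-attention fallback for pre-Ampere GPUs but does not show it; a minimal sketch:

```python
# Sketch: the eager-attention fallback mentioned in the Hardware section, for GPUs
# without flash-attention support (e.g. V100 or earlier generations).
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # disable flash attention
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
```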
yturkunov/cifar10_vit16_lora
yturkunov
"2024-06-10T12:21:01Z"
0
0
transformers
[ "transformers", "safetensors", "vit", "cifar10", "image classification", "image-classification", "en", "dataset:uoft-cs/cifar10", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-10T10:32:07Z"
--- library_name: transformers tags: - vit - cifar10 - image classification license: apache-2.0 datasets: - uoft-cs/cifar10 language: - en metrics: - accuracy - perplexity pipeline_tag: image-classification widget: - src: ./deer_224x224.png example_title: deer 224x224 image example --- ## Model Details ### Model Description An adapter for the [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) ViT trained on the CIFAR10 classification task ## Loading guide ```py from transformers import AutoModelForImageClassification labels2title = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] model = AutoModelForImageClassification.from_pretrained( 'google/vit-base-patch16-224-in21k', num_labels=len(labels2title), id2label={i: c for i, c in enumerate(labels2title)}, label2id={c: i for i, c in enumerate(labels2title)} ) model.load_adapter("yturkunov/cifar10_vit16_lora") ``` ## Learning curves ![image/png](https://cdn-uploads.huggingface.co/production/uploads/655221be7bd4634260e032ca/Ji1ewA_8T1rJuQkdNCIXQ.png) ### Recommendations to input The model expects an image that has gone through the following preprocessing stages: * Scaling range: <img src="https://latex.codecogs.com/gif.latex?[0, 255]\rightarrow[0, 1]" /> * Normalization parameters: <img src="https://latex.codecogs.com/gif.latex?\mu=(.5,.5,.5),\sigma=(.5,.5,.5)" /> * Dimensions: 224x224 * Number of channels: 3 ### Inference on 3x4 random sample ![image/png](https://cdn-uploads.huggingface.co/production/uploads/655221be7bd4634260e032ca/zxj9ID37gJJnkmc8Sl97A.png)
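The input recommendations above describe the preprocessing in prose only. A short sketch that applies them with torchvision and runs one image through the adapter loaded in the card's snippet; `model` and `labels2title` are reused from that snippet, and the image path is a placeholder.

```python
# Sketch: apply the preprocessing described above and classify a single image.
# "deer.png" is a placeholder path; `model` and `labels2title` come from the loading guide.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                              # 224x224, 3 channels expected
    transforms.ToTensor(),                                      # scales [0, 255] -> [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

image = Image.open("deer.png").convert("RGB")                   # placeholder image
pixel_values = preprocess(image).unsqueeze(0)                   # shape (1, 3, 224, 224)

with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits

predicted = logits.argmax(-1).item()
print(labels2title[predicted])
```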
Praneethkeerthi/my_awesome_model
Praneethkeerthi
"2024-06-10T10:33:55Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:33:55Z"
Entry not found
KubLuk/my_awesome_model
KubLuk
"2024-06-10T10:34:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:34:13Z"
Entry not found
bharathn97/chatbot
bharathn97
"2024-06-10T10:34:53Z"
0
0
null
[ "license:unlicense", "region:us" ]
null
"2024-06-10T10:34:53Z"
--- license: unlicense ---
Gaysa/Lebedin-skiy
Gaysa
"2024-06-10T10:38:55Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:36:40Z"
Entry not found
IhorP/girlOne
IhorP
"2024-06-10T10:37:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:37:15Z"
Entry not found
Oslaw/bone_fracture_model
Oslaw
"2024-06-10T12:05:37Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-10T10:39:51Z"
Entry not found
Srihitha2005/llama2-qlora-finetunined-french
Srihitha2005
"2024-06-10T11:44:47Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T10:41:09Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sammy1781/HOPE1
sammy1781
"2024-06-10T10:41:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:41:12Z"
Entry not found
caiyufan/result
caiyufan
"2024-06-10T10:43:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:43:22Z"
Entry not found
zJuu/Qwen-Qwen2-0.5B-1718016303
zJuu
"2024-06-10T10:45:35Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T10:45:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
voxelo/splats
voxelo
"2024-06-29T12:36:44Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-06-10T10:45:08Z"
--- license: other license_name: voxelo license_link: LICENSE ---
zJuu/Qwen-Qwen2-1.5B-1718016404
zJuu
"2024-06-10T10:48:06Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T10:46:45Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
InderV94/lora_model
InderV94
"2024-06-10T10:48:17Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T10:48:06Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-2b-bnb-4bit --- # Uploaded model - **Developed by:** InderV94 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
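The card above documents the adapter's provenance but no loading code. A minimal, hedged usage sketch follows — it assumes the repository stores standard PEFT LoRA adapter weights trained on top of `unsloth/gemma-2b-bnb-4bit`, which the card does not explicitly confirm.

```python
# Hypothetical loading sketch: assumes InderV94/lora_model contains PEFT LoRA
# adapter weights for the 4-bit Gemma base (not confirmed by the card above).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-2b-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base_model, "InderV94/lora_model")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```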
Veture/quantized_model_m3
Veture
"2024-06-10T15:24:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:52:28Z"
Entry not found
mailmail85/modelname1
mailmail85
"2024-06-10T10:52:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:52:44Z"
Entry not found
nikhil928/google-flan-t5-large-770-finetuned-medical-data
nikhil928
"2024-06-10T10:55:31Z"
0
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2024-06-10T10:54:37Z"
Entry not found
alexgrigore/videomae-base-finetuned-good-gesturePhaseV12
alexgrigore
"2024-06-10T11:19:22Z"
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2024-06-10T10:55:10Z"
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-good-gesturePhaseV12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-good-gesturePhaseV12 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: 0.9544 - Loss: 0.2487 - Accuracy Hold: 1.0 - Accuracy Stroke: 0.4286 - Accuracy Recovery: 0.8947 - Accuracy Preparation: 0.9811 - Accuracy Unknown: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 630 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | Accuracy Hold | Accuracy Stroke | Accuracy Recovery | Accuracy Preparation | Accuracy Unknown | |:-------------:|:------:|:----:|:--------:|:---------------:|:-------------:|:---------------:|:-----------------:|:--------------------:|:----------------:| | 1.1508 | 0.2016 | 127 | 0.6900 | 1.0099 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | | 0.7497 | 1.2016 | 254 | 0.7249 | 0.7448 | 0.3077 | 0.0 | 0.0 | 1.0 | 0.0 | | 0.3044 | 2.2016 | 381 | 0.8603 | 0.4170 | 0.7692 | 0.0 | 0.5882 | 0.9620 | 0.6818 | | 0.1617 | 3.2016 | 508 | 0.9127 | 0.3627 | 0.7308 | 0.1667 | 0.8824 | 0.9810 | 0.8636 | | 0.0765 | 4.1937 | 630 | 0.9432 | 0.2175 | 0.8462 | 0.6667 | 0.8824 | 0.9747 | 0.9545 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
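As a usage illustration (not part of the original card), here is a hedged inference sketch for this fine-tuned gesture-phase classifier using the standard `transformers` VideoMAE classes. It assumes the repository ships a processor config alongside the weights; the 16-frame clip below is random dummy data standing in for real video frames.

```python
# Illustrative inference sketch for the fine-tuned gesture-phase classifier.
# The random clip is a placeholder; real use would sample 16 frames from a video.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "alexgrigore/videomae-base-finetuned-good-gesturePhaseV12"
processor = VideoMAEImageProcessor.from_pretrained(repo)  # falls back to MCG-NJU/videomae-base if absent
model = VideoMAEForVideoClassification.from_pretrained(repo)

video = list(np.random.randint(0, 256, (16, 3, 224, 224), dtype=np.uint8))  # 16 dummy frames
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # e.g. a gesture phase such as Hold, Stroke, Recovery, Preparation
```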
MaxwellWu/BERT_imdb
MaxwellWu
"2024-06-10T22:35:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:55:27Z"
Entry not found
Ninja20o0/vector-db
Ninja20o0
"2024-06-10T10:57:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T10:57:33Z"
Entry not found
zJuu/Qwen-Qwen2-0.5B-1718017271
zJuu
"2024-06-10T11:02:09Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T11:01:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RandomNameAnd6/Phi-3-Mini-Dhar-Mann-Adapters-BOS
RandomNameAnd6
"2024-06-10T11:01:53Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-medium-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T11:01:42Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-medium-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** RandomNameAnd6 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-medium-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
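A hedged re-loading sketch for these adapters using Unsloth itself — the card does not document a loading path, so the assumption is that `FastLanguageModel.from_pretrained` can resolve this LoRA adapter repo against its 4-bit Phi-3 base; the prompt is only an illustrative example.

```python
# Hedged sketch: assumes this repo holds Unsloth/PEFT LoRA adapters that
# FastLanguageModel can resolve against the 4-bit Phi-3 base (not confirmed here).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="RandomNameAnd6/Phi-3-Mini-Dhar-Mann-Adapters-BOS",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

prompt = "Write a short Dhar Mann style scene title."  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```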
sedalti99/distilbert-base-uncased-finetuned-sequence
sedalti99
"2024-06-10T11:02:53Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:02:53Z"
Entry not found
Mihirh19/neural_network_from_numpy
Mihirh19
"2024-06-10T11:03:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:03:22Z"
Entry not found
VatsalPatel18/OmicsClip
VatsalPatel18
"2024-06-10T13:02:55Z"
0
0
transformers
[ "transformers", "pytorch", "clip", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T11:05:03Z"
--- license: cc-by-nc-sa-4.0 ---
diabolic6045/Sanskrit-llama
diabolic6045
"2024-06-11T11:26:58Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "dataset:diabolic6045/Sanskrit-llama", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-06-10T11:05:30Z"
--- license: llama3 library_name: peft tags: - axolotl - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B model-index: - name: Sanskrit-llama results: [] datasets: - diabolic6045/Sanskrit-llama --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer max_steps: 2 bnb_config_kwargs: llm_int8_has_fp16_weight: false bnb_4bit_quant_type: nf4 bnb_4bit_use_double_quant: true load_in_8bit: false load_in_4bit: true strict: false datasets: - path: diabolic6045/Sanskrit-llama type: alpaca dataset_prepared_path: val_set_size: 0 output_dir: ./outputs/qlora-out chat_template: chatml hub_model_id: diabolic6045/Sanskrit-llama hf_use_auth_token: true adapter: qlora lora_model_dir: sequence_len: 1024 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: संस्कृतम्-llama wandb_entity: wandb_watch: all wandb_name: संस्कृतम्-llama wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine cosine_min_lr_ratio: 0.2 learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: false fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: false warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 #fsdp: # - full_shard # - auto_wrap #fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: true # fsdp_offload_params: true # fsdp_use_orig_params: false # fsdp_cpu_ram_efficient_loading: true # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer # fsdp_state_dict_type: FULL_STATE_DICT special_tokens: pad_token: "<|end_of_text|>" ``` </details><br> # Sanskrit-llama This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.19.1
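Since the card above documents an alpaca-format QLoRA adapter but no inference code, here is a hedged loading sketch. The gated `meta-llama/Meta-Llama-3-8B` base and the alpaca-style prompt wording are assumptions drawn from the axolotl config, not confirmed usage instructions.

```python
# Hedged sketch: load the QLoRA adapter on top of the Llama-3-8B base and
# prompt it in the alpaca style implied by the axolotl config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"  # gated: requires accepting the Llama 3 license
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "diabolic6045/Sanskrit-llama")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate this sentence into Sanskrit: 'Knowledge is light.'\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```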
zJuu/Qwen-Qwen2-1.5B-1718017540
zJuu
"2024-06-10T11:07:36Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T11:06:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mayaagr/finance_tagging
Mayaagr
"2024-06-10T11:07:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:07:06Z"
Entry not found
modeliaai/neck_segmenter
modeliaai
"2024-06-16T15:50:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:10:52Z"
Entry not found
LiangRenjie/CLIP_RVMR
LiangRenjie
"2024-06-10T11:11:35Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T11:11:35Z"
--- license: mit ---
Apk02/Itrnsp_Flanv2_FT_Llama2_APK
Apk02
"2024-06-10T11:32:37Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T11:11:43Z"
--- license: apache-2.0 ---
mayssakorbi/small_dataset
mayssakorbi
"2024-06-10T11:13:36Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-10T11:12:05Z"
Entry not found
zJuu/Qwen-Qwen2-7B-1718017967
zJuu
"2024-06-10T11:15:44Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T11:13:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
priya1995/S_1
priya1995
"2024-06-10T11:13:58Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T11:13:58Z"
--- license: apache-2.0 ---
psiborgtechnologies/smart-ev-charging-management-system
psiborgtechnologies
"2024-06-10T11:15:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:14:03Z"
iamwaleedshabbir/humandetector
iamwaleedshabbir
"2024-06-10T11:14:37Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:14:37Z"
Entry not found
Equinox391/qwen1.5-llm
Equinox391
"2024-06-10T11:14:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:14:56Z"
Entry not found
VatsalPatel18/phi3-mini-4k-WeatherBot-int4-gguf
VatsalPatel18
"2024-06-10T12:17:05Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T11:17:01Z"
--- license: cc-by-nc-sa-4.0 ---
weitung1121/code-llama-7b-text-to-sql
weitung1121
"2024-06-10T11:17:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:17:21Z"
Entry not found
ipappify/ipa-v3-20240610-1024e10t6d10a0-alignment
ipappify
"2024-06-10T11:35:50Z"
0
0
transformers
[ "transformers", "safetensors", "ipt-translator-v3", "en", "de", "fr", "endpoints_compatible", "region:us" ]
null
"2024-06-10T11:17:47Z"
--- language: - en - de - fr --- Trained alignment head for **ipappify/ipt-v3-20240610-1024e10t6d10-ft_split_0001** on **ipappify/ipt-2048-aligned-md-rp24k**.
naveenreddy/q-FrozenLake-v1-4x4-noSlippery
naveenreddy
"2024-06-10T11:23:04Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T11:17:56Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="naveenreddy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
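A hedged follow-up to the card's snippet above: `load_from_hub` is the helper from the Hugging Face Deep RL course template this card appears to follow, and the pickle is assumed to store the learned Q-table under a `qtable` key. Under those assumptions, a greedy rollout looks roughly like this.

```python
# Hedged rollout sketch: assumes model["qtable"] holds the learned Q-table and
# that env follows the gymnasium-style reset/step API; env and model come from
# the card's own snippet above.
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```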
PureZenCapsule/PureZen
PureZenCapsule
"2024-06-10T11:22:17Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T11:20:11Z"
--- license: apache-2.0 --- What is PureZen? PureZen capsules are a specialized prostate capsule formulated to support prostate health and relieve the symptoms associated with prostate gland problems. As men age, prostate health becomes increasingly important, with many suffering from conditions such as benign prostatic hyperplasia (BPH) or prostatitis. PureZen pills aim to provide a natural, effective solution that promotes prostate health, reduces urinary symptoms and supports overall wellbeing. Official website: <a href="https://www.nutritionsee.com/Pureunisi">www.PureZen.com</a> <p><a href="https://www.nutritionsee.com/Pureunisi"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/PureZen-Tunisia.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/Pureunisi">Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a> Official website: <a href="https://www.nutritionsee.com/Pureunisi">www.PureZen.com</a>
zamasW/pfe
zamasW
"2024-06-10T11:24:45Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T11:24:45Z"
--- license: mit ---
MTWD/detr-resnet-50-brain-hack-v2
MTWD
"2024-06-10T11:59:18Z"
0
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "endpoints_compatible", "region:us" ]
object-detection
"2024-06-10T11:25:20Z"
Entry not found
ShapeKapseln33/FitSmartFat887
ShapeKapseln33
"2024-06-10T11:28:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:26:01Z"
FitSmart France Fat Burner Reviews — FitSmart Fat Burner offers a unique blend of carefully selected ingredients to promote efficient fat burning. From metabolism boosters to appetite suppressants, each component plays a crucial role in helping individuals reach their weight-loss goals. The science behind FitSmart Fat Burner is intriguing: by targeting key areas of the body, it enhances metabolic processes, allowing fat to be broken down more efficiently. **[Click here to buy now from the official FitSmart website](https://kapseln24x7.com/fitsmart-fr)** Weight-management goals become harder as we age, simply because the body cannot keep up. Our metabolism is likely to slow, lowering energy levels and making it difficult to burn fat or shed the last few kilos. The biggest problem is maintaining energy levels, since they dictate our mood and physical performance. Mood is fragile, and when things get worse people may find themselves reaching for their favourite foods or unable to finish a workout. How can we address all of these areas at once, you ask? This is the moment to introduce ##What is FitSmart™? FitSmart™ presents itself as a dietary supplement for fat burning and weight management. It uses only natural ingredients to fuel the body, ultimately driving effective weight and fat loss (acting on the areas with the highest fat content) while raising energy levels. According to its creators, these results are possible when FitSmart™ is combined with a calorie-restricted diet and some form of movement. Our editorial team appreciated that last recommendation, since a number of supplement vendors give the impression that weight loss can be achieved with little or no change to existing lifestyle habits. To better understand how FitSmart™ can support weight and fat goals, let's go straight to the ingredients. ##Which ingredients does FitSmart™ contain? ##The main ingredients in FitSmart™ are: ##Raspberry Extract (150 mg) Raspberry fruit extract (RBE), scientifically known as Rubus idaeus, is made from raspberries. According to one source, this ingredient is known for its red hue and its antioxidant profile. Specifically, it contains vitamins C and E, which neutralise the free radicals responsible for oxidative stress and inflammation, factors that raise the risk of obesity and other conditions classed under metabolic syndrome. A 2019 study that examined the effects of RBE on adipose (fat-cell) tissue dysfunction reported a reduction in ROS (reactive oxygen species), an increase in antioxidant defences (the enzymes SOD, catalase and GPx) and a reduction in fat accumulation. It also regulates inflammatory markers.
As a result, the researchers argued that RBE could be a possible add-on to existing anti-obesity treatment to improve fat-tissue function, a factor essential to a healthy metabolism, and thereby decrease fat mass and reduce inflammation. ##Vitamin B3 (16 mg) Vitamin B3 is one of the 8 B vitamins whose main responsibility is to create the coenzymes NAD and NADP. This pair is involved in more than 400 biochemical reactions, chiefly converting food into usable energy. Other equally important roles include supporting cellular metabolism, cell signalling, DNA creation and repair, and acting as an antioxidant. Regarding blood fat levels, vitamin B3 has been shown to raise good cholesterol and lower bad cholesterol and triglyceride levels. Together, this could imply a reduced risk of developing heart disease, although that has not yet been proven. **[Click here to buy now from the official FitSmart website](https://kapseln24x7.com/fitsmart-fr)** Nevertheless, one source pointed out that there may be other ways vitamin B3 could be beneficial, even if there is no direct link between this nutrient and weight loss. For example, it could help raise energy levels, carrying people through a workout. It is also possible that a well-balanced diet containing the various B vitamins, as part of a calorie-restricted regimen, could be useful as well. That said, vitamin B3 is a dose-dependent ingredient, requiring between 1,000 and 3,000 mg to make a real difference. ##Green Tea Leaf Extract (10 mg) Green tea leaf extract is a concentrated form of green tea made from the Camellia sinensis plant. Like the ingredients above, it is an excellent candidate for antioxidant support, which in turn can limit the effects of oxidative stress. So far, several studies have shown how this ingredient's antioxidant profile can lower inflammation, regulate blood pressure and inhibit cellular fat absorption. It could also help reduce triglyceride levels as well as total and bad cholesterol. In the context of weight loss, the combined effects of caffeine and catechins (i.e. the antioxidants) are said to drive thermogenesis, a process by which the body burns calories to digest food and produce heat. This last claim has not been fully replicated, with some studies reaching positive conclusions and others remaining inconclusive. In the cases where the results were accepted (owing to high study quality and a reasonable sample size), the dose was far too high, which could put people at risk of acute liver failure. ##Guarana Seed Extract (10 mg) Guarana seed extract (GSE) is obtained by powdering the seeds of the ripe fruit of the Paullinia cupana vine. Once again, we have an ingredient with a rich antioxidant profile comparable to green tea. Since guarana contains caffeine, it can help with focus, cognition and mental energy.
Regarding cognition, one study reportedly found that a dose of 37.5 or 75 mg significantly improved test scores, concluding that it could be useful for those wanting to learn and memorise new information. Its caffeine content also plays a role in weight loss, notably by boosting metabolism (by up to 11% over 12 hours), suppressing genes that contribute to fat-cell production and upregulating genes known to slow fat-cell expansion. However, these latter results were based on test-tube studies and require further investigation in humans. Other benefits include relief from diarrhoea and constipation, healthy heart function and pain relief. ##N-Acetyl-L-Carnitine (2 mg) Acetyl-L-carnitine (ALCAR) is an amino acid whose main tasks include increasing cellular energy production, increasing fat burning in the mitochondria (the powerhouse of our cells, where fatty acids are oxidised) and supporting nerve function. In terms of weight and/or fat management, the only proven result is its ability to reduce fatigue, muscle soreness and the risk of developing sleep disorders. Beyond the above, this ingredient is claimed to improve mood and mental clarity. It is important to note, however, that the dose matters, and in this case a dose between 1,000 and 2,000 mg is often recommended. ##Final Remarks In conclusion, FitSmart™ aims to help people looking to accelerate their fat-burning and weight-loss efforts. At first glance, some ingredients have a direct impact on fat burning, while others act indirectly, whether by raising energy levels or mental focus. The latter is essential for getting through workouts and staying mindful of eating habits. Unlike conventional supplement vendors who claim their approach is the definitive solution to weight loss, the FitSmart™ team insists that theirs is meant to be complementary, stressing the importance of a proper, balanced weight-management regimen. With that in mind, people should remember that FitSmart™ must be paired with a calorie-restricted diet; otherwise, the results may not materialise. **[Click here to buy now from the official FitSmart website](https://kapseln24x7.com/fitsmart-fr)**
manbeast3b/KinoInferlol
manbeast3b
"2024-06-16T17:41:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T11:27:14Z"
Entry not found
Trisha2024/llama-2-7b-miniguanaco
Trisha2024
"2024-06-13T09:55:50Z"
0
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T11:29:48Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]