Dataset schema:

| column | dtype | range / classes |
|:--|:--|:--|
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | sequence | lengths 1–4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | |
| card | string | lengths 1–901k |
ckpt/clarity-upscaler · author: ckpt · last_modified: 2024-06-11T20:49:22Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T20:47:01Z
card: Entry not found
Polihazel/Hazel · author: Polihazel · last_modified: 2024-06-11T20:48:13Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:c-uda", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T20:48:13Z
card: --- license: c-uda ---
Quinntaveous/DaveGrohl-Singing-Model · author: Quinntaveous · last_modified: 2024-06-11T20:51:55Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T20:50:05Z
card: --- license: openrail ---
Jaisai/test-model · author: Jaisai · last_modified: 2024-06-11T20:52:37Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T20:52:02Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kent-rachmat/granite-34b-code-instruct · author: kent-rachmat · last_modified: 2024-06-11T20:52:24Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T20:52:05Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
codingninja/w2v-pa-v2 · author: codingninja · last_modified: 2024-06-14T20:26:40Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "endpoints_compatible", "region:us" ] · pipeline_tag: automatic-speech-recognition · createdAt: 2024-06-11T20:54:25Z
card: Entry not found
axssel/ana_medsoc · author: axssel · last_modified: 2024-06-11T21:51:49Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T20:55:12Z
card: Entry not found
ymoslem/whisper-medium-ga2en-v5.2.2-r · author: ymoslem · last_modified: 2024-06-12T01:00:22Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "en", "dataset:ymoslem/IWSLT2023-GA-EN", "dataset:ymoslem/FLEURS-GA-EN", "dataset:ymoslem/BitesizeIrish-GA-EN", "dataset:ymoslem/SpokenWords-GA-EN-MTed", "dataset:ymoslem/Tatoeba-Speech-Irish", "dataset:ymoslem/Wikimedia-Speech-Irish", "base_model:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ] · pipeline_tag: automatic-speech-recognition · createdAt: 2024-06-11T21:03:22Z
card:
---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
metrics:
- bleu
- wer
model-index:
- name: Whisper Small GA-EN Speech Translation, 1 epoch, 10k steps
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia
      type: ymoslem/IWSLT2023-GA-EN
    metrics:
    - name: Bleu
      type: bleu
      value: 34.31
    - name: Wer
      type: wer
      value: 59.70283656010806
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small GA-EN Speech Translation, 1 epoch, 10k steps

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia dataset.
It achieves the following results on the evaluation set: - Loss: 1.3521 - Bleu: 34.31 - Chrf: 52.5 - Wer: 59.7028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.02 - training_steps: 13000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer | |:-------------:|:------:|:-----:|:-----:|:-----:|:---------------:|:--------:| | 2.6291 | 0.0109 | 100 | 2.33 | 16.34 | 2.1971 | 175.5516 | | 2.6591 | 0.0219 | 200 | 5.57 | 22.49 | 2.0357 | 122.2873 | | 2.5637 | 0.0328 | 300 | 7.67 | 26.29 | 1.8690 | 133.0032 | | 2.2954 | 0.0438 | 400 | 11.2 | 30.03 | 1.8062 | 114.2278 | | 2.3292 | 0.0547 | 500 | 9.85 | 29.28 | 1.7421 | 117.2895 | | 2.1223 | 0.0657 | 600 | 14.56 | 32.56 | 1.6739 | 84.2864 | | 2.2398 | 0.0766 | 700 | 13.86 | 34.74 | 1.7187 | 98.9644 | | 2.002 | 0.0876 | 800 | 15.53 | 36.64 | 1.6392 | 96.7582 | | 1.8611 | 0.0985 | 900 | 15.8 | 36.32 | 1.6283 | 94.3719 | | 1.8498 | 0.1095 | 1000 | 17.58 | 36.0 | 1.6102 | 85.5921 | | 1.7585 | 0.1204 | 1100 | 15.91 | 36.61 | 1.6337 | 100.2251 | | 1.6115 | 0.1314 | 1200 | 22.21 | 39.94 | 1.5381 | 76.8122 | | 1.4415 | 0.1423 | 1300 | 20.36 | 37.87 | 1.5864 | 79.1986 | | 1.5103 | 0.1533 | 1400 | 23.2 | 41.26 | 1.4925 | 75.2364 | | 1.6576 | 0.1642 | 1500 | 18.12 | 40.49 | 1.4508 | 102.9266 | | 1.3429 | 0.1752 | 1600 | 27.88 | 43.74 | 1.4399 | 69.7884 | | 1.2522 | 0.1861 | 1700 | 23.04 | 43.31 | 1.4256 | 77.1724 | | 1.2018 | 0.1970 | 1800 | 21.06 | 40.39 | 1.4072 | 78.6583 | | 1.1945 | 0.2080 | 1900 | 23.0 | 42.71 | 1.4222 | 76.7222 | | 1.1869 | 0.2189 | 
2000 | 22.54 | 42.02 | 1.3992 | 75.8667 | | 1.1752 | 0.2299 | 2100 | 20.81 | 41.07 | 1.3926 | 79.5137 | | 1.0281 | 0.2408 | 2200 | 27.24 | 45.55 | 1.3633 | 69.6083 | | 0.894 | 0.2518 | 2300 | 28.6 | 45.58 | 1.3287 | 65.8712 | | 0.9788 | 0.2627 | 2400 | 27.75 | 46.21 | 1.3138 | 69.2931 | | 0.8418 | 0.2737 | 2500 | 27.85 | 46.17 | 1.3064 | 68.3026 | | 0.7559 | 0.2846 | 2600 | 28.44 | 48.52 | 1.2903 | 68.3476 | | 0.8632 | 0.2956 | 2700 | 27.87 | 46.86 | 1.2834 | 68.3476 | | 0.7501 | 0.3065 | 2800 | 28.63 | 49.25 | 1.2669 | 68.5277 | | 0.6953 | 0.3175 | 2900 | 30.46 | 48.83 | 1.2615 | 64.4304 | | 0.7195 | 0.3284 | 3000 | 27.49 | 47.94 | 1.2514 | 71.0941 | | 0.6155 | 0.3394 | 3100 | 30.06 | 49.64 | 1.2428 | 66.5916 | | 0.605 | 0.3503 | 3200 | 31.64 | 50.27 | 1.2040 | 63.8451 | | 0.6349 | 0.3612 | 3300 | 28.96 | 49.35 | 1.2077 | 65.3760 | | 0.4669 | 0.3722 | 3400 | 31.17 | 48.95 | 1.2219 | 64.2503 | | 0.5196 | 0.3831 | 3500 | 30.97 | 50.13 | 1.2124 | 63.8001 | | 0.5141 | 0.3941 | 3600 | 31.97 | 50.8 | 1.2026 | 63.0347 | | 0.4221 | 0.4050 | 3700 | 31.76 | 51.35 | 1.1893 | 63.4399 | | 0.2951 | 0.4160 | 3800 | 32.4 | 51.08 | 1.2049 | 63.1247 | | 0.3898 | 0.4269 | 3900 | 32.15 | 51.09 | 1.1906 | 63.5299 | | 0.4071 | 0.4379 | 4000 | 33.1 | 51.85 | 1.1873 | 62.4043 | | 0.3975 | 0.4488 | 4100 | 29.58 | 49.33 | 1.2117 | 70.3287 | | 0.4206 | 0.4598 | 4200 | 31.69 | 50.8 | 1.2150 | 65.0158 | | 0.2935 | 0.4707 | 4300 | 32.9 | 50.01 | 1.2484 | 62.8546 | | 0.3718 | 0.4817 | 4400 | 31.64 | 50.55 | 1.2055 | 63.8451 | | 0.3722 | 0.4926 | 4500 | 28.16 | 49.28 | 1.2200 | 70.4638 | | 0.2986 | 0.5036 | 4600 | 28.76 | 49.9 | 1.2240 | 68.7528 | | 0.3327 | 0.5145 | 4700 | 29.34 | 49.67 | 1.2052 | 67.5822 | | 0.2489 | 0.5255 | 4800 | 32.52 | 51.77 | 1.2083 | 62.4493 | | 0.3653 | 0.5364 | 4900 | 31.48 | 51.16 | 1.2166 | 63.8451 | | 0.3326 | 0.5473 | 5000 | 33.04 | 51.71 | 1.2169 | 62.4493 | | 0.3045 | 0.5583 | 5100 | 27.45 | 48.22 | 1.2460 | 68.9779 | | 0.3444 | 0.5692 | 5200 | 33.14 | 50.76 | 
1.2829 | 62.2692 | | 0.3236 | 0.5802 | 5300 | 28.89 | 49.37 | 1.2499 | 70.3737 | | 0.3004 | 0.5911 | 5400 | 29.89 | 49.29 | 1.3165 | 68.7078 | | 0.3019 | 0.6021 | 5500 | 32.8 | 49.78 | 1.2782 | 62.8095 | | 0.2923 | 0.6130 | 5600 | 31.75 | 50.26 | 1.2468 | 63.3498 | | 0.3237 | 0.6240 | 5700 | 34.4 | 52.59 | 1.2511 | 61.0986 | | 0.2226 | 0.6349 | 5800 | 30.51 | 50.38 | 1.2479 | 63.3498 | | 0.2207 | 0.6459 | 5900 | 32.68 | 51.97 | 1.2641 | 62.1342 | | 0.2017 | 0.6568 | 6000 | 32.47 | 51.36 | 1.2640 | 62.6745 | | 0.201 | 0.6678 | 6100 | 33.6 | 52.29 | 1.2774 | 61.4588 | | 0.203 | 0.6787 | 6200 | 30.27 | 50.84 | 1.2670 | 65.6461 | | 0.1456 | 0.6897 | 6300 | 31.2 | 51.05 | 1.2656 | 63.3048 | | 0.1607 | 0.7006 | 6400 | 30.39 | 51.04 | 1.2611 | 65.8262 | | 0.1933 | 0.7115 | 6500 | 31.78 | 50.92 | 1.2545 | 63.0797 | | 0.1537 | 0.7225 | 6600 | 30.18 | 50.18 | 1.2500 | 64.7006 | | 0.1279 | 0.7334 | 6700 | 33.23 | 51.0 | 1.2548 | 59.8379 | | 0.1189 | 0.7444 | 6800 | 33.51 | 50.67 | 1.2594 | 61.1887 | | 0.1056 | 0.7553 | 6900 | 32.97 | 51.02 | 1.2578 | 61.9991 | | 0.1105 | 0.7663 | 7000 | 32.74 | 50.83 | 1.2569 | 62.0441 | | 0.1183 | 0.7772 | 7100 | 34.07 | 52.2 | 1.2590 | 60.4232 | | 0.1373 | 0.7882 | 7200 | 33.55 | 50.6 | 1.2430 | 61.2787 | | 0.1325 | 0.7991 | 7300 | 32.36 | 50.39 | 1.2548 | 62.3143 | | 0.0907 | 0.8101 | 7400 | 32.28 | 50.99 | 1.2578 | 61.2787 | | 0.0919 | 0.8210 | 7500 | 33.01 | 51.81 | 1.2791 | 60.4683 | | 0.0852 | 0.8320 | 7600 | 32.97 | 51.56 | 1.2782 | 61.5489 | | 0.1223 | 0.8429 | 7700 | 33.57 | 52.33 | 1.2638 | 59.9280 | | 0.0826 | 0.8539 | 7800 | 33.83 | 52.7 | 1.2634 | 60.1531 | | 0.0783 | 0.8648 | 7900 | 33.79 | 52.31 | 1.2595 | 60.1081 | | 0.0986 | 0.8758 | 8000 | 34.33 | 52.54 | 1.2608 | 59.4327 | | 0.1148 | 0.8867 | 8100 | 34.03 | 52.52 | 1.2736 | 59.8829 | | 0.1134 | 0.8976 | 8200 | 34.14 | 51.64 | 1.3073 | 61.5038 | | 0.1166 | 0.9086 | 8300 | 30.51 | 49.26 | 1.3385 | 65.5561 | | 0.0871 | 0.9195 | 8400 | 32.31 | 51.06 | 1.3313 | 62.5394 | | 
0.0927 | 0.9305 | 8500 | 28.64 | 48.43 | 1.3898 | 69.3832 | | 0.1012 | 0.9414 | 8600 | 33.12 | 52.02 | 1.3144 | 61.4138 | | 0.0742 | 0.9524 | 8700 | 33.68 | 51.38 | 1.3284 | 61.7740 | | 0.0802 | 0.9633 | 8800 | 34.33 | 51.38 | 1.3300 | 61.4138 | | 0.0799 | 0.9743 | 8900 | 33.72 | 50.77 | 1.3328 | 60.1981 | | 0.0936 | 0.9852 | 9000 | 34.76 | 51.4 | 1.3181 | 60.0630 | | 0.1091 | 0.9962 | 9100 | 35.13 | 52.6 | 1.3096 | 59.9730 | | 0.0427 | 1.0071 | 9200 | 35.49 | 53.12 | 1.2905 | 59.8379 | | 0.0338 | 1.0181 | 9300 | 35.33 | 52.62 | 1.3097 | 60.5133 | | 0.0363 | 1.0290 | 9400 | 35.51 | 53.06 | 1.3172 | 59.6128 | | 0.0319 | 1.0400 | 9500 | 36.82 | 53.6 | 1.3166 | 58.3971 | | 0.0434 | 1.0509 | 9600 | 35.62 | 53.28 | 1.3050 | 59.6578 | | 0.0218 | 1.0619 | 9700 | 35.57 | 53.28 | 1.3096 | 59.5227 | | 0.0316 | 1.0728 | 9800 | 36.14 | 53.87 | 1.3162 | 58.3971 | | 0.0315 | 1.0837 | 9900 | 36.26 | 54.16 | 1.3121 | 58.3521 | | 0.0229 | 1.0947 | 10000 | 36.12 | 53.74 | 1.3134 | 58.3071 | | 0.0561 | 1.1056 | 10100 | 34.27 | 53.3 | 1.3263 | 61.0086 | | 0.0485 | 1.1166 | 10200 | 34.26 | 53.1 | 1.3319 | 60.6934 | | 0.0582 | 1.1275 | 10300 | 30.37 | 51.24 | 1.3893 | 70.2837 | | 0.0559 | 1.1385 | 10400 | 31.61 | 49.4 | 1.4005 | 66.0513 | | 0.055 | 1.1494 | 10500 | 31.93 | 50.99 | 1.3793 | 65.0608 | | 0.0612 | 1.1604 | 10600 | 33.31 | 51.91 | 1.3749 | 62.9896 | | 0.0599 | 1.1713 | 10700 | 33.87 | 52.96 | 1.3679 | 61.7740 | | 0.0536 | 1.1823 | 10800 | 32.54 | 51.57 | 1.3313 | 62.2692 | | 0.0531 | 1.1932 | 10900 | 33.83 | 52.11 | 1.3883 | 61.9991 | | 0.0582 | 1.2042 | 11000 | 33.18 | 51.63 | 1.3894 | 61.5038 | | 0.0506 | 1.2151 | 11100 | 32.51 | 51.24 | 1.3338 | 63.5299 | | 0.0489 | 1.2261 | 11200 | 32.95 | 51.53 | 1.3625 | 64.2053 | | 0.0387 | 1.2370 | 11300 | 34.5 | 52.47 | 1.3496 | 60.4232 | | 0.0512 | 1.2479 | 11400 | 34.5 | 52.72 | 1.3731 | 60.6934 | | 0.0459 | 1.2589 | 11500 | 33.27 | 51.89 | 1.3655 | 62.8996 | | 0.0457 | 1.2698 | 11600 | 30.26 | 49.96 | 1.3824 | 67.7623 | | 0.0407 
| 1.2808 | 11700 | 31.56 | 51.37 | 1.3775 | 62.9446 | | 0.0396 | 1.2917 | 11800 | 34.06 | 51.91 | 1.3677 | 59.6128 | | 0.0419 | 1.3027 | 11900 | 34.18 | 52.77 | 1.3648 | 60.1081 | | 0.0291 | 1.3136 | 12000 | 33.9 | 51.61 | 1.3697 | 60.6934 | | 0.0351 | 1.3246 | 12100 | 34.66 | 53.1 | 1.3565 | 60.5133 | | 0.0329 | 1.3355 | 12200 | 33.59 | 53.0 | 1.3592 | 61.8190 | | 0.0409 | 1.3465 | 12300 | 34.41 | 52.96 | 1.3690 | 59.6578 | | 0.0386 | 1.3574 | 12400 | 34.68 | 53.26 | 1.3440 | 59.1175 | | 0.0221 | 1.3684 | 12500 | 33.35 | 51.9 | 1.3450 | 60.3332 | | 0.032 | 1.3793 | 12600 | 33.09 | 52.07 | 1.3514 | 62.3143 | | 0.0364 | 1.3903 | 12700 | 34.08 | 52.49 | 1.3538 | 60.0630 | | 0.024 | 1.4012 | 12800 | 34.75 | 53.14 | 1.3451 | 58.8474 | | 0.0245 | 1.4122 | 12900 | 34.09 | 52.38 | 1.3544 | 59.7479 | | 0.0271 | 1.4231 | 13000 | 34.31 | 52.5 | 1.3521 | 59.7028 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
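The Wer column above is the standard word error rate. As a self-contained illustration of how that metric is computed (word-level Levenshtein distance divided by reference length; this sketch is not the card's own evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution

    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 59.70 therefore means roughly 0.6 word edits per reference word, which is why WER can exceed 100% early in training (see the 175.55 at step 100).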
rafaeloc15/llama3-v3 · author: rafaeloc15 · last_modified: 2024-06-11T21:09:30Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:04:26Z
card:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** rafaeloc15
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kristiannordby/multi-category_atlatl_model · author: kristiannordby · last_modified: 2024-06-12T21:16:05Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "safetensors", "generated_from_trainer", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:05:38Z
card:
---
tags:
- generated_from_trainer
model-index:
- name: multi-category_atlatl_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# multi-category_atlatl_model

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4056

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2982        | 1.0   | 48   | 1.1416          |
| 0.4111        | 2.0   | 96   | 1.1609          |
| 0.1753        | 3.0   | 144  | 1.1880          |
| 0.1085        | 4.0   | 192  | 1.4056          |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
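The `lr_scheduler_type: linear` entry above means the learning rate decays linearly from the initial `learning_rate` to zero over the run's 192 steps (no warmup is listed). A minimal sketch of that schedule, as an illustration rather than the Trainer's internal implementation:

```python
def linear_lr(step: int, total_steps: int = 192, base_lr: float = 1e-05) -> float:
    """Linear decay with no warmup: base_lr at step 0, zero at the final step."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Halfway through training the learning rate has halved.
print(linear_lr(96))   # 5e-06
print(linear_lr(192))  # 0.0
```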
YaTharThShaRma999/rvc_models · author: YaTharThShaRma999 · last_modified: 2024-06-15T21:41:07Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:apache-2.0", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:06:24Z
card: --- license: apache-2.0 ---
janetyu/distilbert-base-uncased-finetuned-imdb · author: janetyu · last_modified: 2024-06-12T19:41:51Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: fill-mask · createdAt: 2024-06-11T21:06:36Z
card:
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- eval_loss: 3.0882
- eval_runtime: 404.4745
- eval_samples_per_second: 2.472
- eval_steps_per_second: 0.04
- step: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
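The card above reports only the raw cross-entropy (`eval_loss: 3.0882`). For masked-language-model fine-tunes like this one, the customary derived figure is perplexity, the exponential of that loss; as a quick illustration (a computation on the reported number, not part of the card itself):

```python
import math

eval_loss = 3.0882                 # cross-entropy from the card's evaluation block
perplexity = math.exp(eval_loss)   # perplexity = exp(cross-entropy), ~21.9 here
print(f"Perplexity: {perplexity:.2f}")
```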
A01794620/distilbert-base-cased-finetuned-emotion · author: A01794620 · last_modified: 2024-06-11T21:06:57Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:06:57Z
card: Entry not found
Obaaaaa/Luffy04 · author: Obaaaaa · last_modified: 2024-06-11T21:08:43Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:07:15Z
card: --- license: openrail ---
TopperThijs/Llama2-Open-ended-Finetuned-6epochs25mlm · author: TopperThijs · last_modified: 2024-06-11T22:30:12Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "tensorboard", "safetensors", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:07:41Z
card: Entry not found
ahad-j/q-Taxi-v3 · author: ahad-j · last_modified: 2024-06-11T21:19:16Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ] · pipeline_tag: reinforcement-learning · createdAt: 2024-06-11T21:19:14Z
card:
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="ahad-j/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
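The agent above was trained with tabular Q-learning. A minimal sketch of the core update such an agent relies on, assuming Taxi-v3's 500 discrete states and 6 actions (an illustration of the technique, not this repository's actual training code):

```python
import numpy as np

N_STATES, N_ACTIONS = 500, 6            # Taxi-v3 observation/action space sizes
q_table = np.zeros((N_STATES, N_ACTIONS))

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q(s, a) toward the TD target."""
    td_target = reward + gamma * q[next_state].max()
    q[state, action] += alpha * (td_target - q[state, action])

# From an all-zero table, one rewarded transition nudges Q(s, a) to alpha * reward.
q_update(q_table, state=0, action=1, reward=1.0, next_state=2)
print(q_table[0, 1])  # 0.1
```

At evaluation time the greedy policy simply picks `q_table[state].argmax()` in each state.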
C0ttontheBunny/HL2Models · author: C0ttontheBunny · last_modified: 2024-06-11T21:22:24Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:22:10Z
card: --- license: openrail ---
katk31/YOUR_REPO_ID · author: katk31 · last_modified: 2024-06-11T21:24:48Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:24:48Z
card: Entry not found
C0ttontheBunny/ToyStorymodels · author: C0ttontheBunny · last_modified: 2024-06-11T21:26:52Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:26:40Z
card: --- license: openrail ---
fisica/fisica · author: fisica · last_modified: 2024-06-11T21:30:53Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:30:53Z
card: Entry not found
MihaC/llama3-8b-cosmic-fusion-dynamics-lora · author: MihaC · last_modified: 2024-06-11T21:33:37Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:33:12Z
card:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** MihaC
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
NaveenHugs/llama-3-8b-Instruct-bnb-4bit-dadJokes · author: NaveenHugs · last_modified: 2024-06-12T22:57:00Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:33:39Z
card:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** NaveenHugs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RandomlyCreatedAI/FunnyBot · author: RandomlyCreatedAI · last_modified: 2024-06-11T21:41:11Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:39:50Z
card: Entry not found
damnshigu/Gayoon · author: damnshigu · last_modified: 2024-06-11T21:47:02Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:41:55Z
card: Entry not found
shuyuej/MedGemma2B-Spanish · author: shuyuej · last_modified: 2024-06-11T22:33:57Z · downloads: 0 · likes: 1 · library_name: null · tags: [ "safetensors", "license:apache-2.0", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:44:17Z
card: --- license: apache-2.0 ---
Paco4365483/finetune2 · author: Paco4365483 · last_modified: 2024-06-11T21:52:01Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "llava_llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2024-06-11T21:44:55Z
card: Entry not found
damnshigu/Jihyun · author: damnshigu · last_modified: 2024-06-11T21:50:35Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:47:25Z
card: Entry not found
damnshigu/Jiyoon · author: damnshigu · last_modified: 2024-06-11T21:54:44Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:51:09Z
card: Entry not found
CROPART/TESTE1 · author: CROPART · last_modified: 2024-06-11T21:51:37Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:51:37Z
card: Entry not found
oualidlamrini/lsg-classification-ocr-4096 · author: oualidlamrini · last_modified: 2024-06-11T21:53:28Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:53:28Z
card: Entry not found
ogbi/ika-mms-1bv2 · author: ogbi · last_modified: 2024-06-11T21:53:55Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-11T21:53:54Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abhayesian/LLama2_HarmBench_NoAttack_3
abhayesian
"2024-06-12T01:25:50Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T21:56:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AliHaider0343/Term-Tokenizor
AliHaider0343
"2024-06-11T21:57:56Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T21:57:39Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jajca/onlymakingthistoarchiveflps
jajca
"2024-06-11T22:02:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:00:58Z"
Entry not found
SauravMaheshkar/simclrv1-imagenet1k-resnet50-1x
SauravMaheshkar
"2024-06-11T22:09:55Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2002.05709", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:02:23Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv1](https://arxiv.org/abs/2002.05709). Conversion script from [tonylins/simclr-converter](https://github.com/tonylins/simclr-converter). ```bibtex @article{chen2020simple, title={A Simple Framework for Contrastive Learning of Visual Representations}, author={Chen, Ting and Kornblith, Simon and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2002.05709}, year={2020} } ```
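A converted checkpoint like this can typically be loaded into a torchvision ResNet-50 after stripping any wrapper prefixes (e.g. `module.` from `DataParallel`) out of the state-dict keys. The sketch below is an assumption, not part of the official converter: the filename `resnet50-1x.pth` and the nested `state_dict` key are guesses — check the repository files and the converter's README for the real layout.

```python
def strip_prefix(state_dict, prefix="module."):
    """Drop a wrapper prefix (e.g. from DataParallel) from state-dict keys."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }


def load_simclr_resnet50(path="resnet50-1x.pth"):
    """Hedged sketch: load a converted SimCLRv1 ResNet-50 checkpoint.

    The filename and checkpoint layout here are assumptions; adjust them to
    the actual files in this repository.
    """
    import torch
    from torchvision.models import resnet50

    ckpt = torch.load(path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # some converters nest the weights
    model = resnet50(num_classes=1000)
    # strict=False tolerates the projection-head keys that ImageNet ResNet lacks
    model.load_state_dict(strip_prefix(state), strict=False)
    return model
```

The key-stripping helper is the only part guaranteed to transfer across converters; everything else should be verified against the checkpoint you download.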
archiesarrewood/geography_shapes.parquet
archiesarrewood
"2024-06-11T22:12:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:02:40Z"
Entry not found
iamanaiart/LCM-westernAnimation_v1-openvino
iamanaiart
"2024-06-11T22:08:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:05:57Z"
Entry not found
Petrozi/Z-1
Petrozi
"2024-06-11T22:14:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:09:52Z"
Entry not found
SauravMaheshkar/simclrv1-imagenet1k-resnet50-2x
SauravMaheshkar
"2024-06-11T22:10:59Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2002.05709", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:10:12Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv1](https://arxiv.org/abs/2002.05709). Conversion script from [tonylins/simclr-converter](https://github.com/tonylins/simclr-converter). ```bibtex @article{chen2020simple, title={A Simple Framework for Contrastive Learning of Visual Representations}, author={Chen, Ting and Kornblith, Simon and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2002.05709}, year={2020} } ```
arqpriscila/pri
arqpriscila
"2024-06-11T22:11:24Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:11:24Z"
Entry not found
SauravMaheshkar/simclrv1-imagenet1k-resnet50-4x
SauravMaheshkar
"2024-06-11T22:12:42Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2002.05709", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:11:34Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv1](https://arxiv.org/abs/2002.05709). Conversion script from [tonylins/simclr-converter](https://github.com/tonylins/simclr-converter). ```bibtex @article{chen2020simple, title={A Simple Framework for Contrastive Learning of Visual Representations}, author={Chen, Ting and Kornblith, Simon and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2002.05709}, year={2020} } ```
MadBonze/whisper-base-gztan-classification
MadBonze
"2024-06-12T16:59:35Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "audio-classification", "endpoints_compatible", "region:us" ]
audio-classification
"2024-06-11T22:13:35Z"
Entry not found
jamescraiggg/autotrain-znipi-u0jhw
jamescraiggg
"2024-06-11T22:16:47Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "dataset:yezhengli9/wmt20-en-de", "base_model:Qwen/Qwen2-1.5B-Instruct", "license:other", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-11T22:14:46Z"
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: Qwen/Qwen2-1.5B-Instruct widget: - messages: - role: user content: What is your favorite condiment? license: other datasets: - yezhengli9/wmt20-en-de --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
shahd-2005k/Llama-2-7b-chat-hf
shahd-2005k
"2024-06-11T22:14:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:14:58Z"
Entry not found
Maouu/billythecook
Maouu
"2024-06-12T14:13:18Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:17:51Z"
Entry not found
SauravMaheshkar/simclrv2-imagenet1k-r50_1x_sk0
SauravMaheshkar
"2024-06-11T22:24:06Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:23:16Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
llm-wizard/llama38binstruct_summarize
llm-wizard
"2024-06-11T22:44:19Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
"2024-06-11T22:24:09Z"
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: NousResearch/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: llama38binstruct_summarize results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama38binstruct_summarize This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.6753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4436 | 1.1905 | 25 | 1.0958 | | 0.5989 | 2.3810 | 50 | 1.2958 | | 0.2448 | 3.5714 | 75 | 1.5235 | | 0.099 | 4.7619 | 100 | 1.6753 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
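The hyperparameters above pair a constant LR schedule with a warmup phase, and `lr_scheduler_warmup_steps: 0.03` is most plausibly a fraction of the 100 training steps rather than a literal step count — that interpretation is an assumption. A plain-Python sketch of such a schedule (not TRL's exact implementation):

```python
def constant_with_warmup_lr(step, base_lr=2e-4, total_steps=100, warmup=0.03):
    """Linear warmup to base_lr, then hold it constant.

    A sketch only: `warmup` values below 1 are treated as a fraction of
    total_steps (an assumption about how 0.03 is interpreted), values of
    1 or more as an absolute step count.
    """
    if warmup < 1:
        warmup_steps = max(1, round(warmup * total_steps))
    else:
        warmup_steps = int(warmup)
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # ramp up over warmup_steps
    return base_lr  # constant thereafter
```

With these defaults the warmup lasts 3 steps (0.03 × 100), after which every step uses the full 2e-4 learning rate.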
gomasho/ONEY111
gomasho
"2024-06-11T22:30:50Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-11T22:24:25Z"
--- license: openrail ---
SauravMaheshkar/simclrv2-imagenet1k-r50_1x_sk1
SauravMaheshkar
"2024-06-11T22:26:56Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:26:24Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
SauravMaheshkar/simclrv2-imagenet1k-r50_2x_sk0
SauravMaheshkar
"2024-06-11T22:45:42Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:27:41Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
LarryAIDraw/emilie
LarryAIDraw
"2024-06-11T22:39:57Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-11T22:32:54Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/509545/emilie-genshin-impact
davidhhmack/basic_dpo_model
davidhhmack
"2024-06-11T22:33:22Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T22:33:02Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** davidhhmack - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
LarryAIDraw/irohaIsshiki_XL-Pony_LoRA-C3Lier_8-8-8-8_AdamW_Un3e-4_Te1_5e-4_10batch
LarryAIDraw
"2024-06-11T22:40:11Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-11T22:33:33Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/506252/request-iroha-isshiki-oregairu-my-teen-romantic-comedy-snafu-sdxl-pony-diffusion
LarryAIDraw/YinlinWWv1
LarryAIDraw
"2024-06-11T22:40:20Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-11T22:34:07Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/280746/yinlin-wuthering-waves-character
LarryAIDraw/ys259pony_v10
LarryAIDraw
"2024-06-11T22:40:36Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-11T22:34:50Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/509311/genshinimpactcharacterseries6ponylora
LarryAIDraw/clorinde_kozue
LarryAIDraw
"2024-06-11T22:40:45Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-11T22:35:33Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/499609/clorinde-genshin-impact
LarryAIDraw/kashima_pony
LarryAIDraw
"2024-06-11T22:41:02Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-11T22:36:30Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/508234/pony-xl-kashima-kantai-collection
Dumele/viv-beta-mistral
Dumele
"2024-06-12T13:10:41Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T22:37:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
josejointriple/brand_classification_1_20240611
josejointriple
"2024-06-11T22:38:55Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:38:55Z"
Entry not found
AwesomeEmerald/BusyMenChat
AwesomeEmerald
"2024-06-11T22:42:27Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T22:42:16Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** AwesomeEmerald - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Augusto777/vit-base-patch16-224-ve-b-U10-12
Augusto777
"2024-06-11T22:53:38Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-11T22:48:46Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-ve-b-U10-12 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.7450980392156863 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-ve-b-U10-12 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9868 - Accuracy: 0.7451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.96 | 6 | 1.3771 | 0.3137 | | 1.3705 | 1.92 | 12 | 1.3219 | 0.5490 | | 1.3705 | 2.88 | 18 | 1.2517 | 0.5490 | | 1.2535 | 4.0 | 25 | 1.1875 | 0.5882 | | 1.1079 | 4.96 | 31 | 1.1237 | 0.6078 | | 1.1079 | 5.92 | 37 | 1.1003 | 0.6275 | | 1.0048 | 6.88 | 43 | 1.0609 | 0.6863 | | 0.9172 | 8.0 | 50 | 1.0668 | 0.6078 | | 0.9172 | 8.96 | 56 | 1.0031 | 0.6667 | | 0.8558 | 9.92 | 62 | 0.9868 | 0.7451 | | 0.8558 | 
10.88 | 68 | 0.9763 | 0.7451 | | 0.8284 | 11.52 | 72 | 0.9733 | 0.7451 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
jointriple/brand_classification_1_20240611_tokenizer
jointriple
"2024-06-11T22:49:57Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:eu" ]
null
"2024-06-11T22:49:55Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SauravMaheshkar/simclrv2-imagenet1k-r50_2x_sk1
SauravMaheshkar
"2024-06-11T22:57:20Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:54:19Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
Augusto777/vit-base-patch16-224-ve-b-U10-24
Augusto777
"2024-06-11T23:02:56Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-11T22:54:51Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-ve-b-U10-24 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8431372549019608 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-ve-b-U10-24 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6432 - Accuracy: 0.8431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 24 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.96 | 6 | 1.3827 | 0.3137 | | 1.378 | 1.92 | 12 | 1.3335 | 0.5490 | | 1.378 | 2.88 | 18 | 1.2577 | 0.5882 | | 1.2725 | 4.0 | 25 | 1.1886 | 0.4706 | | 1.1073 | 4.96 | 31 | 1.1040 | 0.6275 | | 1.1073 | 5.92 | 37 | 1.0658 | 0.6078 | | 0.9657 | 6.88 | 43 | 1.0155 | 0.6667 | | 0.8361 | 8.0 | 50 | 0.9330 | 0.7451 | | 0.8361 | 8.96 | 56 | 0.9690 | 0.6667 | | 0.7181 | 9.92 | 62 | 0.8910 | 0.7255 | | 0.7181 | 
10.88 | 68 | 0.8953 | 0.6863 | | 0.6126 | 12.0 | 75 | 0.8343 | 0.7451 | | 0.5096 | 12.96 | 81 | 0.8048 | 0.7059 | | 0.5096 | 13.92 | 87 | 0.7977 | 0.7059 | | 0.4348 | 14.88 | 93 | 0.7250 | 0.7451 | | 0.4011 | 16.0 | 100 | 0.6432 | 0.8431 | | 0.4011 | 16.96 | 106 | 0.7317 | 0.7255 | | 0.3292 | 17.92 | 112 | 0.7015 | 0.7451 | | 0.3292 | 18.88 | 118 | 0.6248 | 0.7647 | | 0.309 | 20.0 | 125 | 0.6990 | 0.7451 | | 0.2744 | 20.96 | 131 | 0.6591 | 0.7843 | | 0.2744 | 21.92 | 137 | 0.6452 | 0.7647 | | 0.2864 | 22.88 | 143 | 0.6290 | 0.7843 | | 0.2864 | 23.04 | 144 | 0.6285 | 0.7843 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
rashid996958/pix2pix_exp27
rashid996958
"2024-06-11T22:55:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:55:11Z"
Entry not found
Ramikan-BR/tinyllama-coder-py-LORA-v23
Ramikan-BR
"2024-06-11T22:56:48Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T22:56:01Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Meitt/speecht5_tts_voxpopuli_nl
Meitt
"2024-06-11T22:57:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:57:50Z"
Entry not found
SauravMaheshkar/simclrv2-imagenet1k-r101_1x_sk0
SauravMaheshkar
"2024-06-11T23:01:30Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T22:58:11Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
RandomlyCreatedAI/RandyMarsh
RandomlyCreatedAI
"2024-06-11T23:00:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T22:59:50Z"
Entry not found
frankmurray/prince
frankmurray
"2024-06-11T23:03:16Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-11T23:01:23Z"
--- license: openrail ---
SauravMaheshkar/simclrv2-imagenet1k-r101_1x_sk1
SauravMaheshkar
"2024-06-11T23:04:29Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:03:04Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
alexzarate/tess_fenn-v0.2
alexzarate
"2024-06-11T23:16:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:05:46Z"
Entry not found
DaynRedrawn/aisuejiawuoiuio239xzc
DaynRedrawn
"2024-06-12T05:01:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:09:34Z"
Entry not found
hdve/google-gemma-2b-1718147435
hdve
"2024-06-11T23:12:57Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T23:10:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Masioki/fusion_asrtbsc_distilbert-uncased-best
Masioki
"2024-06-17T19:15:14Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "fusion-cross-attention-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
"2024-06-11T23:11:26Z"
--- tags: - generated_from_trainer model-index: - name: fusion_asrtbsc_distilbert-uncased-best results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: 72.22 - name: F1 macro GT type: F1 macro value: 72.29 datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fusion_asrtbsc_distilbert-uncased-best Multi-label dialogue act classification (DAC) on ASR transcripts, fusing prosody and ASR encodings via residual cross-attention. ## Model description ASR encoder: [Whisper small](https://huggingface.co/openai/whisper-small) encoder Prosody encoder: 2 layer transformer encoder with initial dense projection Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased) Fusion: 2 residual cross attention fusion layers (F_asr x F_text and F_prosody x F_text) with a dense layer on top Pooling: Self attention Multi-label classification head: 2 dense layers with two dropouts of 0.3 and a Tanh activation in between ## Training and evaluation data Trained on ASR transcripts. Evaluated on ground truth (GT) and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00043 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
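A minimal sketch of the prosody encoder described in the card above — a 2-layer transformer encoder with an initial dense projection. The input feature dimension, model width, and head count are assumptions for illustration, not values taken from the repository:

```python
import torch
import torch.nn as nn

class ProsodyEncoder(nn.Module):
    """2-layer transformer encoder with an initial dense projection.

    in_dim, d_model, and nhead are illustrative guesses; the actual
    model's dimensions may differ.
    """

    def __init__(self, in_dim: int = 4, d_model: int = 768, nhead: int = 8):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)  # initial dense projection
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, prosodic features) -> (batch, frames, d_model)
        return self.encoder(self.proj(x))

feats = torch.randn(2, 50, 4)  # e.g. pitch/energy-style frame features
out = ProsodyEncoder()(feats)
```

The resulting frame-level encodings would then feed the residual cross-attention fusion layers alongside the ASR and text features.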
Kofimicheals/Baloq
Kofimicheals
"2024-06-11T23:12:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:12:32Z"
Entry not found
yzhuang/gemma-1.1-7b-it_fictional_French_v1
yzhuang
"2024-06-11T23:15:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:15:56Z"
Entry not found
allenai/tulu-v2.5-13b-chatbot-arena-2023-rm
allenai
"2024-06-14T02:05:39Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "en", "dataset:allenai/tulu-2.5-preference-data", "dataset:allenai/tulu-v2-sft-mixture", "arxiv:2406.09279", "base_model:allenai/tulu-2-13b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-06-11T23:17:57Z"
--- model-index: - name: tulu-v2.5-13b-chatbot-arena-2023-rm results: [] datasets: - allenai/tulu-2.5-preference-data - allenai/tulu-v2-sft-mixture language: - en base_model: allenai/tulu-2-13b license: apache-2.0 --- <center> <img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-2.5/tulu_25_banner.png" alt="Tulu 2.5 banner image" width="800px"/> </center> # Model Card for Tulu V2.5 13B RM - Chatbot Arena 2023 Tulu is a series of language models that are trained to act as helpful assistants. Tulu V2.5 is a series of models trained using DPO and PPO starting from the [Tulu 2 suite](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101). This is a reward model used for PPO training, trained on the Chatbot Arena 2023 (Chatbot Arena conversations) dataset. It was used to train [this](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-chatbot-arena-2023) model. For more details, read the paper: [Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279). ## Model description - **Model type:** One model belonging to a suite of RLHF tuned chat models on a mix of publicly available, synthetic and human-created datasets. - **Language(s) (NLP):** English - **License:** Apache 2.0. - **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ### Model Sources - **Repository:** https://github.com/allenai/open-instruct - **Dataset:** Data used to train this model can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `chatbot_arena_2023` split. - **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618). ## Input Format The model is trained to use the following format (note the newlines): ``` <|user|> Your message here! 
<|assistant|> ``` For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.** We have included a [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating) in the tokenizer implementing this template. ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human created instructions and synthetic dialogues generated primarily by other LLMs. We then further trained the model with a [Jax RM trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_rm.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the dataset mentioned above. This model is meant as a research artefact. ### Training hyperparameters The following hyperparameters were used during PPO training: - learning_rate: 1e-06 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear cooldown to 1e-05. - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ## Citation If you find Tulu 2.5 useful in your work, please cite it with: ``` @misc{ivison2024unpacking, title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}}, author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi}, year={2024}, eprint={2406.09279}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
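The prompt format described in the card above can be reproduced with a small helper (a sketch — the helper name is illustrative, and the chat template bundled with the tokenizer remains the authoritative implementation):

```python
def format_tulu_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Tulu format: <|user|>, the
    message, then <|assistant|>, each followed by a newline.

    The trailing newline after <|assistant|> matters -- the card warns
    that omitting it can noticeably hurt generation quality.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_tulu_prompt("Your message here!")
```

In practice, calling `tokenizer.apply_chat_template` with the template shipped in the repository should produce an equivalent string.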
floch0189/135
floch0189
"2024-06-11T23:22:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:21:42Z"
Entry not found
Myriam123/tun_model
Myriam123
"2024-06-11T23:22:31Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:22:31Z"
Entry not found
nannnzk/google-gemma-7b-1718148212
nannnzk
"2024-06-11T23:24:04Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "region:us" ]
null
"2024-06-11T23:23:32Z"
--- library_name: peft base_model: google/gemma-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Augusto777/vit-base-patch16-224-ve-U10-40
Augusto777
"2024-06-11T23:41:39Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-11T23:24:38Z"
Entry not found
floch0189/Eva16Lite201k
floch0189
"2024-06-11T23:25:31Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:25:14Z"
Entry not found
Grayx/john_paul_van_damme_8
Grayx
"2024-06-11T23:40:23Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:28:22Z"
Entry not found
ncabrera97/HlnPrr
ncabrera97
"2024-06-11T23:37:46Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:33:38Z"
Entry not found
haturusinghe/xlm_r_base-finetuned_after_mrp-v2-royal-violet-7
haturusinghe
"2024-06-11T23:36:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:36:33Z"
Entry not found
2024takelucrativo/2024warren1
2024takelucrativo
"2024-06-12T00:59:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:37:09Z"
Entry not found
SauravMaheshkar/simclrv2-imagenet1k-r101_2x_sk0
SauravMaheshkar
"2024-06-11T23:38:38Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:37:49Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
SauravMaheshkar/simclrv2-imagenet1k-r101_2x_sk1
SauravMaheshkar
"2024-06-11T23:42:58Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:39:11Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
Grayx/john_paul_van_damme_9
Grayx
"2024-06-11T23:39:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:39:45Z"
Entry not found
SayanoAI/RVC-models
SayanoAI
"2024-06-11T23:56:42Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-11T23:40:45Z"
--- license: openrail ---
pinkamype/ModelsXL
pinkamype
"2024-06-12T00:27:53Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:41:54Z"
Entry not found
SauravMaheshkar/simclrv2-imagenet1k-r152_1x_sk0
SauravMaheshkar
"2024-06-11T23:46:12Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:43:31Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
impossibleexchange/ommm1
impossibleexchange
"2024-06-12T19:03:15Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
null
"2024-06-11T23:45:29Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
SauravMaheshkar/simclrv2-imagenet1k-r152_1x_sk1
SauravMaheshkar
"2024-06-11T23:49:56Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:46:42Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
SauravMaheshkar/simclrv2-imagenet1k-r152_2x_sk0
SauravMaheshkar
"2024-06-11T23:54:53Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:50:58Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```
tinutmap/my_awesome_model_tf
tinutmap
"2024-06-11T23:51:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:51:48Z"
Entry not found
erectiled/zane
erectiled
"2024-06-11T23:54:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:54:42Z"
Entry not found
iamanaiart/LCM-disneyPixarCartoon_v10-openvino
iamanaiart
"2024-06-11T23:57:28Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T23:55:15Z"
Entry not found
SauravMaheshkar/simclrv2-imagenet1k-r152_2x_sk1
SauravMaheshkar
"2024-06-12T00:00:28Z"
0
0
null
[ "self-supervised learning", "dataset:ILSVRC/imagenet-1k", "arxiv:2006.10029", "license:apache-2.0", "region:us" ]
null
"2024-06-11T23:55:19Z"
--- license: apache-2.0 datasets: - ILSVRC/imagenet-1k metrics: - accuracy tags: - self-supervised learning --- Official PyTorch converted weights of [SimCLRv2](https://arxiv.org/abs/2006.10029). Conversion script from [Separius/SimCLRv2-Pytorch](https://github.com/Separius/SimCLRv2-Pytorch). ```bibtex @article{chen2020big, title={Big Self-Supervised Models are Strong Semi-Supervised Learners}, author={Chen, Ting and Kornblith, Simon and Swersky, Kevin and Norouzi, Mohammad and Hinton, Geoffrey}, journal={arXiv preprint arXiv:2006.10029}, year={2020} } ```