| column | dtype | range / values |
|---|---|---|
| modelId | stringlengths | 5 to 122 |
| author | stringlengths | 2 to 42 |
| last_modified | unknown | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | stringclasses | 245 values |
| tags | sequencelengths | 1 to 4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | unknown | |
| card | stringlengths | 1 to 901k |
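A minimal sketch of how one row of this dump might be typed in Python, assuming the column schema above; the `ModelRecord` name and the concrete field types (timestamps kept as ISO-8601 strings) are illustrative assumptions, not part of the source data.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ModelRecord:
    """One row of the dump: Hub metadata plus the raw model-card text.

    Field names mirror the columns listed above; the types are assumptions
    inferred from the declared dtypes.
    """
    modelId: str                 # e.g. "AriaRahmati1/222ghesmat7part2"
    author: str                  # Hub namespace, e.g. "AriaRahmati1"
    last_modified: str           # ISO-8601 timestamp string
    downloads: int
    likes: int
    library_name: Optional[str]  # null when the repo declares no library
    tags: List[str]              # e.g. ["license:openrail", "region:us"]
    pipeline_tag: Optional[str]  # null when the repo declares no pipeline
    createdAt: str               # ISO-8601 timestamp string
    card: str                    # raw README markdown, or "Entry not found"
```

Rows whose repository has no README carry the literal string "Entry not found" in the card column.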
**AriaRahmati1/222ghesmat7part2** · author: AriaRahmati1 · last_modified: 2024-06-22T16:31:55Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:24:31Z · card:
--- license: openrail ---
**luisthedragon/test-model-1** · author: luisthedragon · last_modified: 2024-06-22T16:24:52Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:24:52Z · card:
Entry not found
**latthawat/cook** · author: latthawat · last_modified: 2024-06-22T16:27:37Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:27:37Z · card:
Entry not found
**JavierVS/C1** · author: JavierVS · last_modified: 2024-06-22T16:27:50Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:mit", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:27:50Z · card:
--- license: mit ---
**b-fujino/LUM_int8** · author: b-fujino · last_modified: 2024-06-22T16:38:11Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "pytorch", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] · pipeline_tag: text-generation · createdAt: 2024-06-22T16:29:35Z · card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**tqfang229/deberta-v3-large-com2-atomic** · author: tqfang229 · last_modified: 2024-06-22T16:33:14Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "pytorch", "tensorboard", "deberta-v2", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: fill-mask · createdAt: 2024-06-22T16:31:23Z · card:
Entry not found
**HAMZABZ/mistral_fine_tuned236** · author: HAMZABZ · last_modified: 2024-06-22T16:31:37Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:31:32Z · card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**HAMZABZ/mistral_fine_tuned221** · author: HAMZABZ · last_modified: 2024-06-22T16:33:26Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:33:21Z · card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**aerainyourarea/S5Yooyeon** · author: aerainyourarea · last_modified: 2024-06-22T16:38:57Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:36:10Z · card:
--- license: openrail ---
**kmcls/first** · author: kmcls · last_modified: 2024-06-22T16:36:58Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:36:58Z · card:
Entry not found
**Kakapoor/llava-v1.5-13b-task-lora-618** · author: Kakapoor · last_modified: 2024-06-22T17:30:04Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "safetensors", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:42:14Z · card:
Entry not found
**tqfang229/llama-2-7b-p_2i_chatgpt** · author: tqfang229 · last_modified: 2024-06-22T16:53:28Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] · pipeline_tag: text-generation · createdAt: 2024-06-22T16:48:15Z · card:
--- license: llama2 ---
**Sergi1700/Melisa** · author: Sergi1700 · last_modified: 2024-06-22T16:49:47Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:apache-2.0", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:49:47Z · card:
--- license: apache-2.0 ---
**AriaRahmati1/222ghesmat8part1** · author: AriaRahmati1 · last_modified: 2024-06-22T17:00:56Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:52:09Z · card:
--- license: openrail ---
**tqfang229/llama-2-7b-p_2i** · author: tqfang229 · last_modified: 2024-06-22T17:00:50Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] · pipeline_tag: text-generation · createdAt: 2024-06-22T16:53:52Z · card:
--- license: llama2 ---
**andreeadumitru/liar_bert** · author: andreeadumitru · last_modified: 2024-06-22T17:29:22Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: text-classification · createdAt: 2024-06-22T16:54:16Z · card:
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: liar_bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # liar_bert This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
**tqfang229/llama-2-7b-atomic2020** · author: tqfang229 · last_modified: 2024-06-22T17:01:07Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] · pipeline_tag: text-generation · createdAt: 2024-06-22T16:54:36Z · card:
--- license: llama2 ---
**Coolllll/RonnieRuysdael** · author: Coolllll · last_modified: 2024-06-22T16:58:34Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:57:13Z · card:
Entry not found
**mimiklee/t5-small-finetuned-xsum** · author: mimiklee · last_modified: 2024-06-22T16:57:39Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:57:39Z · card:
Entry not found
**jensongui/new-dummy-model** · author: jensongui · last_modified: 2024-06-22T17:03:17Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T16:58:44Z · card:
Entry not found
**itay-nakash/model_5da0492152_sweep_comfy-snowflake-793** · author: itay-nakash · last_modified: 2024-06-22T17:04:56Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:04:56Z · card:
Entry not found
**nqv2291/mt0_base-sft-open_ner_en_only-remake** · author: nqv2291 · last_modified: 2024-06-22T17:05:22Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:05:22Z · card:
Entry not found
**AriaRahmati1/222ghesmat8part2** · author: AriaRahmati1 · last_modified: 2024-06-22T17:45:28Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:06:47Z · card:
--- license: openrail ---
**itay-nakash/model_e4ad58a464_sweep_glorious-serenity-794** · author: itay-nakash · last_modified: 2024-06-22T17:07:01Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:07:01Z · card:
Entry not found
**saicharan8/telugu_bert_2** · author: saicharan8 · last_modified: 2024-06-22T17:10:00Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "roberta", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: fill-mask · createdAt: 2024-06-22T17:09:48Z · card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_faces-expand50-aligned_unaugmentation** · author: hchcsuim · last_modified: 2024-06-22T17:36:28Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: image-classification · createdAt: 2024-06-22T17:11:40Z · card:
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: batch-size16_FFPP-raw_opencv-1FPS_faces-expand50-aligned_unaugmentation results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9550033422067946 - name: Precision type: precision value: 0.958071187230572 - name: Recall type: recall value: 0.9856464348321172 - name: F1 type: f1 value: 0.9716632079582296 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_FFPP-raw_opencv-1FPS_faces-expand50-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1154 - Accuracy: 0.9550 - Precision: 0.9581 - Recall: 0.9856 - F1: 0.9717 - Roc Auc: 0.9902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.168 | 0.9996 | 1332 | 0.1154 | 0.9550 | 0.9581 | 0.9856 | 0.9717 | 0.9902 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
**itay-nakash/model_e4ad58a464_sweep_dry-silence-795** · author: itay-nakash · last_modified: 2024-06-22T17:11:42Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:11:42Z · card:
Entry not found
**sajjad55/wsdbanglat5_2e4_MT0** · author: sajjad55 · last_modified: 2024-06-22T17:51:58Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:bigscience/mt0-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: text2text-generation · createdAt: 2024-06-22T17:12:39Z · card:
--- license: apache-2.0 base_model: bigscience/mt0-base tags: - generated_from_trainer model-index: - name: wsdbanglat5_2e4_MT0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wsdbanglat5_2e4_MT0 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0142 | 1.0 | 1481 | 0.0122 | | 0.0092 | 2.0 | 2962 | 0.0072 | | 0.0078 | 3.0 | 4443 | 0.0060 | | 0.0049 | 4.0 | 5924 | 0.0057 | | 0.0026 | 5.0 | 7405 | 0.0057 | | 0.0013 | 6.0 | 8886 | 0.0065 | | 0.001 | 7.0 | 10367 | 0.0064 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
**sherven/MMMAL** · author: sherven · last_modified: 2024-06-22T17:12:59Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:openrail", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:12:59Z · card:
--- license: openrail ---
**staturecrane/image-gen-16m** · author: staturecrane · last_modified: 2024-06-22T21:30:41Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:15:35Z · card:
Entry not found
**TommyBushetta/Vbh** · author: TommyBushetta · last_modified: 2024-06-22T17:31:23Z · downloads: 0 · likes: 0 · library_name: espnet · tags: [ "espnet", "art", "fill-mask", "ar", "dataset:nvidia/HelpSteer2", "license:apache-2.0", "region:us" ] · pipeline_tag: fill-mask · createdAt: 2024-06-22T17:27:07Z · card:
--- license: apache-2.0 datasets: - nvidia/HelpSteer2 language: - ar metrics: - charcut_mt library_name: espnet pipeline_tag: fill-mask tags: - art ---
**hchcsuim/batch-size16_Celeb-DF_opencv-1FPS_faces-expand40-aligned_unaugmentation** · author: hchcsuim · last_modified: 2024-06-22T17:37:40Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: image-classification · createdAt: 2024-06-22T17:27:33Z · card:
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: batch-size16_Celeb-DF_opencv-1FPS_faces-expand40-aligned_unaugmentation results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9438615721891638 - name: Precision type: precision value: 0.9452811692362535 - name: Recall type: recall value: 0.9903828197945845 - name: F1 type: f1 value: 0.967306552368793 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_Celeb-DF_opencv-1FPS_faces-expand40-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1504 - Accuracy: 0.9439 - Precision: 0.9453 - Recall: 0.9904 - F1: 0.9673 - Roc Auc: 0.9728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.2003 | 0.9962 | 199 | 0.1504 | 0.9439 | 0.9453 | 0.9904 | 0.9673 | 0.9728 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
**woweenie/v71-ds21-curated2-3e5cos-cd0.02-embeddingperturb1-3k-half** · author: woweenie · last_modified: 2024-06-22T17:31:02Z · downloads: 0 · likes: 0 · library_name: diffusers · tags: [ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ] · pipeline_tag: text-to-image · createdAt: 2024-06-22T17:28:09Z · card:
Entry not found
**mrunalmania/palligemma-cord-base** · author: mrunalmania · last_modified: 2024-06-28T18:03:37Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "NLP", "ComputerVision", "image-to-text", "en", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ] · pipeline_tag: image-to-text · createdAt: 2024-06-22T17:30:06Z · card:
--- library_name: transformers tags: - NLP - ComputerVision license: mit language: - en metrics: - accuracy pipeline_tag: image-to-text --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Mrunal Ashwinbhai Maniya (Arizona State University), Harshkumar Navadiya (NewYork University), Deep Jiteshkumar Sakhiya (NewYork University), Neel Savani (Stevens Institute of Technology) - **Funded by [optional]:** By Self - **Model type:** - **Language(s) (NLP):** [More Information Needed] - **License:** MIT - **Finetuned from model [optional]:** Google Gemma ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**raiyan007/huggingface-presized** · author: raiyan007 · last_modified: 2024-06-22T17:40:03Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:34:34Z · card:
Entry not found
**Adesh298/example** · author: Adesh298 · last_modified: 2024-06-22T17:37:57Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:mit", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:37:57Z · card:
--- license: mit ---
**AdithyaSK/paligemma_vqav2** · author: AdithyaSK · last_modified: 2024-06-22T17:56:36Z · downloads: 0 · likes: 0 · library_name: peft · tags: [ "peft", "safetensors", "generated_from_trainer", "dataset:vq_av2", "base_model:google/paligemma-3b-pt-224", "license:gemma", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:38:00Z · card:
--- base_model: google/paligemma-3b-pt-224 datasets: - vq_av2 library_name: peft license: gemma tags: - generated_from_trainer model-index: - name: paligemma_vqav2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/cognitive-lab/huggingface/runs/d0v5ycnb) # paligemma_vqav2 This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.3.1+cu118 - Datasets 2.20.0 - Tokenizers 0.19.1
**amruth-2005/AI-WORKSHOP** · author: amruth-2005 · last_modified: 2024-06-22T17:38:04Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:38:04Z · card:
Entry not found
**msrishav28/DreamAI-28** · author: msrishav28 · last_modified: 2024-06-22T17:40:07Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:40:07Z · card:
Entry not found
**IslemTouati/scene_segmentation** · author: IslemTouati · last_modified: 2024-06-23T05:58:20Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "tf", "segformer", "generated_from_keras_callback", "base_model:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2024-06-22T17:40:10Z · card:
--- license: other base_model: nvidia/mit-b0 tags: - generated_from_keras_callback model-index: - name: IslemTouati/scene_segmentation results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IslemTouati/scene_segmentation This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: nan - Validation Loss: nan - Validation Mean Iou: 0.0038 - Validation Mean Accuracy: 0.0238 - Validation Overall Accuracy: 0.1957 - Validation Accuracy Wall: 1.0 - Validation Accuracy Building: 0.0 - Validation Accuracy Sky: 0.0 - Validation Accuracy Floor: 0.0 - Validation Accuracy Tree: 0.0 - Validation Accuracy Ceiling: 0.0 - Validation Accuracy Road: nan - Validation Accuracy Bed : 0.0 - Validation Accuracy Windowpane: 0.0 - Validation Accuracy Grass: 0.0 - Validation Accuracy Cabinet: 0.0 - Validation Accuracy Sidewalk: 0.0 - Validation Accuracy Person: 0.0 - Validation Accuracy Earth: nan - Validation Accuracy Door: 0.0 - Validation Accuracy Table: 0.0 - Validation Accuracy Mountain: 0.0 - Validation Accuracy Plant: 0.0 - Validation Accuracy Curtain: 0.0 - Validation Accuracy Chair: 0.0 - Validation Accuracy Car: 0.0 - Validation Accuracy Water: 0.0 - Validation Accuracy Painting: nan - Validation Accuracy Sofa: nan - Validation Accuracy Shelf: 0.0 - Validation Accuracy House: nan - Validation Accuracy Sea: nan - Validation Accuracy Mirror: nan - Validation Accuracy Rug: 0.0 - Validation Accuracy Field: nan - Validation Accuracy Armchair: nan - Validation Accuracy Seat: 0.0 - Validation Accuracy Fence: nan - Validation Accuracy Desk: 0.0 - Validation Accuracy Rock: nan - Validation Accuracy Wardrobe: nan - Validation Accuracy Lamp: nan - Validation Accuracy Bathtub: nan - Validation Accuracy Railing: nan - Validation Accuracy Cushion: nan - Validation Accuracy Base: nan - Validation Accuracy Box: nan - Validation Accuracy Column: 0.0 - Validation Accuracy Signboard: 0.0 - Validation Accuracy Chest of drawers: nan - Validation Accuracy Counter: 0.0 - Validation Accuracy Sand: nan - Validation Accuracy Sink: nan - Validation Accuracy Skyscraper: nan - Validation Accuracy Fireplace: 0.0 - Validation Accuracy Refrigerator: nan - Validation Accuracy Grandstand: 0.0 - Validation Accuracy Path: nan - Validation Accuracy Stairs: nan - Validation Accuracy Runway: nan - Validation Accuracy Case: nan - Validation Accuracy Pool table: nan - Validation Accuracy Pillow: nan - Validation Accuracy Screen door: nan - Validation Accuracy Stairway: 0.0 - Validation Accuracy River: nan - Validation Accuracy Bridge: nan - Validation Accuracy Bookcase: nan - Validation Accuracy Blind: nan - Validation Accuracy Coffee table: nan - Validation Accuracy Toilet: nan - Validation Accuracy Flower: nan - Validation Accuracy Book: 0.0 - Validation Accuracy Hill: nan - Validation Accuracy Bench: nan - Validation Accuracy Countertop: 0.0 - Validation Accuracy Stove: 0.0 - Validation Accuracy Palm: nan - Validation Accuracy Kitchen island: nan - Validation Accuracy Computer: nan - Validation Accuracy Swivel chair: 0.0 - Validation Accuracy Boat: nan - Validation Accuracy Bar: nan - Validation Accuracy Arcade machine: nan - Validation Accuracy Hovel: nan - Validation Accuracy Bus: nan - Validation Accuracy Towel: 0.0 - Validation Accuracy Light: nan - Validation Accuracy Truck: 
nan - Validation Accuracy Tower: nan - Validation Accuracy Chandelier: 0.0 - Validation Accuracy Awning: 0.0 - Validation Accuracy Streetlight: nan - Validation Accuracy Booth: nan - Validation Accuracy Television receiver: nan - Validation Accuracy Airplane: nan - Validation Accuracy Dirt track: nan - Validation Accuracy Apparel: nan - Validation Accuracy Pole: nan - Validation Accuracy Land: nan - Validation Accuracy Bannister: nan - Validation Accuracy Escalator: nan - Validation Accuracy Ottoman: nan - Validation Accuracy Bottle: nan - Validation Accuracy Buffet: nan - Validation Accuracy Poster: nan - Validation Accuracy Stage: nan - Validation Accuracy Van: nan - Validation Accuracy Ship: nan - Validation Accuracy Fountain: nan - Validation Accuracy Conveyer belt: nan - Validation Accuracy Canopy: nan - Validation Accuracy Washer: nan - Validation Accuracy Plaything: nan - Validation Accuracy Swimming pool: nan - Validation Accuracy Stool: nan - Validation Accuracy Barrel: nan - Validation Accuracy Basket: nan - Validation Accuracy Waterfall: nan - Validation Accuracy Tent: nan - Validation Accuracy Bag: 0.0 - Validation Accuracy Minibike: nan - Validation Accuracy Cradle: nan - Validation Accuracy Oven: nan - Validation Accuracy Ball: nan - Validation Accuracy Food: nan - Validation Accuracy Step: nan - Validation Accuracy Tank: nan - Validation Accuracy Trade name: 0.0 - Validation Accuracy Microwave: nan - Validation Accuracy Pot: nan - Validation Accuracy Animal: 0.0 - Validation Accuracy Bicycle: nan - Validation Accuracy Lake: nan - Validation Accuracy Dishwasher: nan - Validation Accuracy Screen: nan - Validation Accuracy Blanket: nan - Validation Accuracy Sculpture: 0.0 - Validation Accuracy Hood: nan - Validation Accuracy Sconce: nan - Validation Accuracy Vase: 0.0 - Validation Accuracy Traffic light: nan - Validation Accuracy Tray: nan - Validation Accuracy Ashcan: nan - Validation Accuracy Fan: nan - Validation Accuracy Pier: nan - Validation Accuracy Crt screen: nan - Validation Accuracy Plate: nan - Validation Accuracy Monitor: nan - Validation Accuracy Bulletin board: nan - Validation Accuracy Shower: nan - Validation Accuracy Radiator: nan - Validation Accuracy Glass: nan - Validation Accuracy Clock: nan - Validation Accuracy Flag: nan - Validation Iou Wall: 0.1579 - Validation Iou Building: 0.0 - Validation Iou Sky: 0.0 - Validation Iou Floor: 0.0 - Validation Iou Tree: 0.0 - Validation Iou Ceiling: 0.0 - Validation Iou Road: nan - Validation Iou Bed : 0.0 - Validation Iou Windowpane: 0.0 - Validation Iou Grass: 0.0 - Validation Iou Cabinet: 0.0 - Validation Iou Sidewalk: 0.0 - Validation Iou Person: 0.0 - Validation Iou Earth: nan - Validation Iou Door: 0.0 - Validation Iou Table: 0.0 - Validation Iou Mountain: 0.0 - Validation Iou Plant: 0.0 - Validation Iou Curtain: 0.0 - Validation Iou Chair: 0.0 - Validation Iou Car: 0.0 - Validation Iou Water: 0.0 - Validation Iou Painting: nan - Validation Iou Sofa: nan - Validation Iou Shelf: 0.0 - Validation Iou House: nan - Validation Iou Sea: nan - Validation Iou Mirror: nan - Validation Iou Rug: 0.0 - Validation Iou Field: nan - Validation Iou Armchair: nan - Validation Iou Seat: 0.0 - Validation Iou Fence: nan - Validation Iou Desk: 0.0 - Validation Iou Rock: nan - Validation Iou Wardrobe: nan - Validation Iou Lamp: nan - Validation Iou Bathtub: nan - Validation Iou Railing: nan - Validation Iou Cushion: nan - Validation Iou Base: nan - Validation Iou Box: nan - Validation Iou Column: 0.0 - Validation Iou Signboard: 0.0 - 
Validation Iou Chest of drawers: nan - Validation Iou Counter: 0.0 - Validation Iou Sand: nan - Validation Iou Sink: nan - Validation Iou Skyscraper: nan - Validation Iou Fireplace: 0.0 - Validation Iou Refrigerator: nan - Validation Iou Grandstand: 0.0 - Validation Iou Path: nan - Validation Iou Stairs: nan - Validation Iou Runway: nan - Validation Iou Case: nan - Validation Iou Pool table: nan - Validation Iou Pillow: nan - Validation Iou Screen door: nan - Validation Iou Stairway: 0.0 - Validation Iou River: nan - Validation Iou Bridge: nan - Validation Iou Bookcase: nan - Validation Iou Blind: nan - Validation Iou Coffee table: nan - Validation Iou Toilet: nan - Validation Iou Flower: nan - Validation Iou Book: 0.0 - Validation Iou Hill: nan - Validation Iou Bench: nan - Validation Iou Countertop: 0.0 - Validation Iou Stove: 0.0 - Validation Iou Palm: nan - Validation Iou Kitchen island: nan - Validation Iou Computer: nan - Validation Iou Swivel chair: 0.0 - Validation Iou Boat: nan - Validation Iou Bar: nan - Validation Iou Arcade machine: nan - Validation Iou Hovel: nan - Validation Iou Bus: nan - Validation Iou Towel: 0.0 - Validation Iou Light: nan - Validation Iou Truck: nan - Validation Iou Tower: nan - Validation Iou Chandelier: 0.0 - Validation Iou Awning: 0.0 - Validation Iou Streetlight: nan - Validation Iou Booth: nan - Validation Iou Television receiver: nan - Validation Iou Airplane: nan - Validation Iou Dirt track: nan - Validation Iou Apparel: nan - Validation Iou Pole: nan - Validation Iou Land: nan - Validation Iou Bannister: nan - Validation Iou Escalator: nan - Validation Iou Ottoman: nan - Validation Iou Bottle: nan - Validation Iou Buffet: nan - Validation Iou Poster: nan - Validation Iou Stage: nan - Validation Iou Van: nan - Validation Iou Ship: nan - Validation Iou Fountain: nan - Validation Iou Conveyer belt: nan - Validation Iou Canopy: nan - Validation Iou Washer: nan - Validation Iou Plaything: nan - Validation Iou Swimming pool: nan - Validation Iou Stool: nan - Validation Iou Barrel: nan - Validation Iou Basket: nan - Validation Iou Waterfall: nan - Validation Iou Tent: nan - Validation Iou Bag: 0.0 - Validation Iou Minibike: nan - Validation Iou Cradle: nan - Validation Iou Oven: nan - Validation Iou Ball: nan - Validation Iou Food: nan - Validation Iou Step: nan - Validation Iou Tank: nan - Validation Iou Trade name: 0.0 - Validation Iou Microwave: nan - Validation Iou Pot: nan - Validation Iou Animal: 0.0 - Validation Iou Bicycle: nan - Validation Iou Lake: nan - Validation Iou Dishwasher: nan - Validation Iou Screen: nan - Validation Iou Blanket: nan - Validation Iou Sculpture: 0.0 - Validation Iou Hood: nan - Validation Iou Sconce: nan - Validation Iou Vase: 0.0 - Validation Iou Traffic light: nan - Validation Iou Tray: nan - Validation Iou Ashcan: nan - Validation Iou Fan: nan - Validation Iou Pier: nan - Validation Iou Crt screen: nan - Validation Iou Plate: nan - Validation Iou Monitor: nan - Validation Iou Bulletin board: nan - Validation Iou Shower: nan - Validation Iou Radiator: nan - Validation Iou Glass: nan - Validation Iou Clock: nan - Validation Iou Flag: nan - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 
'PolynomialDecay', 'config': {'initial_learning_rate': 6e-05, 'decay_steps': 40, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Accuracy Wall | Validation Accuracy Building | Validation Accuracy Sky | Validation Accuracy Floor | Validation Accuracy Tree | Validation Accuracy Ceiling | Validation Accuracy Road | Validation Accuracy Bed | Validation Accuracy Windowpane | Validation Accuracy Grass | Validation Accuracy Cabinet | Validation Accuracy Sidewalk | Validation Accuracy Person | Validation Accuracy Earth | Validation Accuracy Door | Validation Accuracy Table | Validation Accuracy Mountain | Validation Accuracy Plant | Validation Accuracy Curtain | Validation Accuracy Chair | Validation Accuracy Car | Validation Accuracy Water | Validation Accuracy Painting | Validation Accuracy Sofa | Validation Accuracy Shelf | Validation Accuracy House | Validation Accuracy Sea | Validation Accuracy Mirror | Validation Accuracy Rug | Validation Accuracy Field | Validation Accuracy Armchair | Validation Accuracy Seat | Validation Accuracy Fence | Validation Accuracy Desk | Validation Accuracy Rock | Validation Accuracy Wardrobe | Validation Accuracy Lamp | Validation Accuracy Bathtub | Validation Accuracy Railing | Validation Accuracy Cushion | Validation Accuracy Base | Validation Accuracy Box | Validation Accuracy Column | Validation Accuracy Signboard | Validation Accuracy Chest of drawers | Validation Accuracy Counter | Validation Accuracy Sand | Validation Accuracy Sink | Validation Accuracy Skyscraper | Validation Accuracy Fireplace | Validation Accuracy Refrigerator | Validation Accuracy Grandstand | Validation Accuracy Path | Validation Accuracy Stairs | Validation Accuracy Runway | Validation Accuracy Case | Validation Accuracy Pool table | Validation Accuracy Pillow | Validation Accuracy Screen door | Validation Accuracy Stairway | Validation Accuracy River | Validation Accuracy Bridge | Validation Accuracy Bookcase | Validation Accuracy Blind | Validation Accuracy Coffee table | Validation Accuracy Toilet | Validation Accuracy Flower | Validation Accuracy Book | Validation Accuracy Hill | Validation Accuracy Bench | Validation Accuracy Countertop | Validation Accuracy Stove | Validation Accuracy Palm | Validation Accuracy Kitchen island | Validation Accuracy Computer | Validation Accuracy Swivel chair | Validation Accuracy Boat | Validation Accuracy Bar | Validation Accuracy Arcade machine | Validation Accuracy Hovel | Validation Accuracy Bus | Validation Accuracy Towel | Validation Accuracy Light | Validation Accuracy Truck | Validation Accuracy Tower | Validation Accuracy Chandelier | Validation Accuracy Awning | Validation Accuracy Streetlight | Validation Accuracy Booth | Validation Accuracy Television receiver | Validation Accuracy Airplane | Validation Accuracy Dirt track | Validation Accuracy Apparel | Validation Accuracy Pole | Validation Accuracy Land | Validation Accuracy Bannister | Validation Accuracy Escalator | Validation Accuracy Ottoman | Validation Accuracy Bottle | Validation Accuracy Buffet | Validation Accuracy Poster | Validation Accuracy Stage | Validation Accuracy Van | Validation Accuracy Ship | Validation Accuracy Fountain | Validation 
[Flattened training-results table truncated here: the remaining header cells list the per-class Validation Accuracy columns (Conveyer belt through Flag) and the per-class Validation IoU columns (Wall through Flag) plus Epoch; the single recorded row (epoch 0) carries the leading values nan | nan | 0.0038 | 0.0238 | 0.1957 | 1.0, and virtually every per-class entry is 0.0 or nan.] ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
panxinyang/Qwen-Qwen1.5-0.5B-1719078179
panxinyang
"2024-06-22T17:43:02Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-22T17:43:00Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
chickenparmasean/test-model
chickenparmasean
"2024-06-22T17:50:02Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-22T17:46:17Z"
Entry not found
azurehorizon/gemma-Code-Instruct-Finetune-test
azurehorizon
"2024-06-22T17:51:30Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-22T17:46:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
silent666/Qwen-Qwen1.5-0.5B-1719078434
silent666
"2024-06-22T17:47:16Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-22T17:47:14Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
AriaRahmati1/222ghesmat8part3
AriaRahmati1
"2024-06-22T18:00:07Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-22T17:51:03Z"
--- license: openrail ---
CHE-72/Breeze-7B-Instruct-v1_0-Q5_K_S-GGUF
CHE-72
"2024-06-22T17:59:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T17:59:33Z"
Entry not found
shuyuej/MedLLaMA3-70B-Spanish
shuyuej
"2024-06-24T02:56:32Z"
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-22T18:00:39Z"
--- license: apache-2.0 ---
shuyuej/MedMistral-MoE-Spanish
shuyuej
"2024-06-22T23:53:17Z"
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-22T18:01:07Z"
--- license: apache-2.0 ---
AriaRahmati1/222ghesmat9part1
AriaRahmati1
"2024-06-22T18:13:27Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-22T18:02:06Z"
--- license: openrail ---
Kakapoor/llava-v1.5-13b-task-lora-618_new
Kakapoor
"2024-06-22T19:49:35Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-22T18:02:49Z"
Entry not found
LinxuanPastel/VamosChicosTITAN
LinxuanPastel
"2024-06-22T18:33:19Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:09:31Z"
Entry not found
Fischerboot/ll3-sophie-new-8ep
Fischerboot
"2024-06-22T19:22:10Z"
0
0
peft
[ "peft", "llama", "generated_from_trainer", "base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-06-22T18:10:32Z"
--- base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge library_name: peft tags: - generated_from_trainer model-index: - name: outputs/newdataset-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: true strict: false chat_template: llama3 datasets: - path: Fischerboot/newnewdataset-sophie type: sharegpt conversation: llama3 dataset_prepared_path: last_run_prepared val_set_size: 0.1 output_dir: ./outputs/newdataset-out adapter: qlora lora_model_dir: sequence_len: 128 sample_packing: false pad_to_sequence_len: true lora_r: 1024 lora_alpha: 512 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 1 micro_batch_size: 1 num_epochs: 8 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 eval_sample_packing: false warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<|begin_of_text|>" eos_token: "<|end_of_text|>" pad_token: "<|end_of_text|>" ``` </details><br> # outputs/newdataset-out This model is a fine-tuned version of [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.3499 | 0.0034 | 1 | 6.0611 | | 1.4549 | 0.2526 | 74 | 1.8669 | | 0.4942 | 0.5051 | 148 | 0.5161 | | 0.5932 | 0.7577 | 222 | 1.2850 | | 0.8581 | 1.0102 | 296 | 0.7266 | | 1.1222 | 1.2628 | 370 | 0.3729 | | 0.4354 | 1.5154 | 444 | 0.4699 | | 0.6122 | 1.7679 | 518 | 0.6806 | | 0.7419 | 2.0205 | 592 | 0.8912 | | 2.7271 | 2.2730 | 666 | 1.2924 | | 0.93 | 2.5256 | 740 | 0.8516 | | 0.7029 | 2.7782 | 814 | 0.5884 | | 0.5606 | 3.0307 | 888 | 0.5291 | | 0.4365 | 3.2833 | 962 | 0.8004 | | 0.2466 | 3.5358 | 1036 | 0.3922 | | 0.6039 | 3.7884 | 1110 | 0.3917 | | 0.1796 | 4.0410 | 1184 | 0.3216 | | 0.3061 | 4.2935 | 1258 | 0.4309 | | 0.7083 | 4.5461 | 1332 | 0.4010 | | 0.3891 | 4.7986 | 1406 | 0.3268 | | 0.331 | 5.0512 | 1480 | 0.3360 | | 0.3014 | 5.3038 | 1554 | 0.2963 | | 0.125 | 5.5563 | 1628 | 0.3096 | | 0.3207 | 5.8089 | 1702 | 0.3020 | | 0.2809 | 6.0614 | 1776 | 0.2849 | | 1.5804 | 6.3140 | 1850 | 0.2801 | | 0.4681 | 6.5666 | 1924 | 0.2826 | | 0.2527 | 6.8191 | 1998 | 0.2793 | | 0.2207 | 7.0717 | 2072 | 0.2787 | | 0.2498 | 7.3242 | 2146 | 0.2799 | | 0.1927 | 7.5768 | 2220 | 0.2798 | | 0.415 | 7.8294 | 2294 | 0.2792 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
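The resulting weights are a QLoRA adapter for the base model named in the axolotl config above. A minimal sketch of attaching and querying that adapter with PEFT is shown below; the repo id, the 4-bit loading choice, and the generation settings are illustrative assumptions, not part of the original card.

```python
# Minimal sketch, assuming the adapter is published under this record's repo id
# and that the base model from the axolotl config is reachable on the Hub.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, BitsAndBytesConfig

adapter_id = "Fischerboot/ll3-sophie-new-8ep"  # assumed: repo id of this record
base_id = "Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge"  # base_model from the config

# Mirror the config's `load_in_4bit: true` when materialising the base weights.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=bnb,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```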
axssel/austin_reave
axssel
"2024-06-22T18:12:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:12:38Z"
Entry not found
Xie/sdxl-blocks
Xie
"2024-06-30T08:01:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:14:58Z"
Entry not found
Niharrrrrr/pierre_gasly
Niharrrrrr
"2024-06-23T17:03:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:15:29Z"
Entry not found
JimmyTheBarkeep/SwipeLeft
JimmyTheBarkeep
"2024-06-22T18:16:06Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-22T18:16:06Z"
--- license: apache-2.0 ---
jriewerts/helloWorld
jriewerts
"2024-06-22T18:16:39Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:16:39Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_cerulean-dust-796
itay-nakash
"2024-06-22T18:18:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:18:25Z"
Entry not found
aflah/HF_DEPLOYMENT_TESTING_llama-3-8b-bnb-4bit
aflah
"2024-06-22T18:24:43Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-22T18:18:33Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** aflah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jriewerts/helloWeb
jriewerts
"2024-06-22T18:19:12Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-22T18:19:12Z"
--- license: apache-2.0 ---
itay-nakash/model_e4ad58a464_sweep_valiant-silence-797
itay-nakash
"2024-06-22T18:19:53Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:19:53Z"
Entry not found
valerielucro/mistral_gsm8k_dpo_cot_r64_epoch3
valerielucro
"2024-06-22T18:20:37Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-22T18:20:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tykiww/llama3-8b-bnb-4bit-lora
tykiww
"2024-07-01T01:51:08Z"
0
1
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-22T18:22:14Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** tykiww - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) --------------------------------------------- # Setting up and testing own Endpoint Handler Sources: - https://www.philschmid.de/custom-inference-handler - https://discuss.huggingface.co/t/model-wont-load-on-custom-inference-endpoint/91780 - https://huggingface.co/docs/inference-endpoints/guides/custom_handler ### Setup Environment Install necessary packages to set up and test endpoint handler. ``` # install git-lfs to interact with the repository sudo apt-get update sudo apt-get install git-lfs # install transformers (not needed for inference since it is installed by default in the container) pip install transformers[sklearn,sentencepiece,audio,vision] ``` Clone model weights of interest. ``` git lfs install git clone https://huggingface.co/tykiww/llama3-8b-bnb-4bit-lora ``` Login to huggingface ``` # setup cli with token huggingface-cli login git config --global credential.helper store ``` Confirm login in case you are unsure. ``` huggingface-cli whoami ``` Navigate to repo and create a handler.py file ``` cd llama3-8b-bnb-4bit-lora #&& touch handler.py ``` Create a requirements.txt file with the following items ``` huggingface_hub unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git xformers trl<0.9.0 peft==0.11.1 bitsandbytes transformers==4.41.2 # must use /: ``` Must have a GPU compatible with Unsloth.
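The walkthrough above creates an empty `handler.py` but stops before showing its body. A minimal sketch of a custom handler for this adapter follows; the `EndpointHandler` class name and `__call__(data)` signature come from the custom-handler interface described in the linked guides, while the 4-bit loading, PEFT usage, and parameter names are assumptions rather than the author's implementation.

```python
# handler.py -- minimal sketch of a custom Inference Endpoints handler.
# Assumptions: the repo holds a LoRA adapter on top of unsloth/llama-3-8b-bnb-4bit,
# the tokenizer files are present in the repo, and the endpoint runs on a CUDA GPU
# (required for bitsandbytes 4-bit loading).
from typing import Any, Dict, List

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, BitsAndBytesConfig


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the local directory of the cloned repository inside the container.
        bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
        self.model = AutoPeftModelForCausalLM.from_pretrained(
            path, quantization_config=bnb, device_map="auto"
        )
        self.tokenizer = AutoTokenizer.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, str]]:
        # Inference Endpoints send the request body as {"inputs": ..., "parameters": {...}}.
        prompt = data["inputs"]
        params = data.get("parameters", {}) or {}
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        out = self.model.generate(
            **inputs, max_new_tokens=params.get("max_new_tokens", 128)
        )
        return [{"generated_text": self.tokenizer.decode(out[0], skip_special_tokens=True)}]
```

Before deploying, the handler can be smoke-tested locally by instantiating it against the cloned repo and calling it with a plain dict, e.g. `EndpointHandler(path=".")({"inputs": "Hello"})`.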
AriaRahmati1/222ghesmat9part2
AriaRahmati1
"2024-06-22T18:45:30Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-22T18:22:22Z"
--- license: openrail ---
junannn/llama3-8b-cosmic-fusion-dynamics-lora
junannn
"2024-06-22T18:26:07Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-22T18:25:58Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** junannn - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
hugosousa/phi3_fft
hugosousa
"2024-07-02T22:32:34Z"
0
0
transformers
[ "transformers", "phi3", "text-generation", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-22T18:26:23Z"
Entry not found
pdudka/llama38binstruct_summarize
pdudka
"2024-06-22T18:27:38Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
"2024-06-22T18:27:19Z"
--- base_model: NousResearch/Meta-Llama-3-8B-Instruct datasets: - generator library_name: peft license: other tags: - trl - sft - generated_from_trainer model-index: - name: llama38binstruct_summarize results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama38binstruct_summarize This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 2.1323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3057 | 1.25 | 25 | 2.1323 | | 2.241 | 2.5 | 50 | 2.1323 | | 2.3289 | 3.75 | 75 | 2.1323 | | 2.3337 | 5.0 | 100 | 2.1323 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
kobrasoft/kobraspeech-rnn-cs
kobrasoft
"2024-06-24T22:34:09Z"
0
0
tensorflow
[ "tensorflow", "tensorboard", "keras", "automatic-speech-recognition", "speech", "Tensorflow", "Keras", "RNN", "cs", "dataset:mozilla-foundation/common_voice_17_0", "license:cc-by-nc-sa-4.0", "model-index", "region:us" ]
automatic-speech-recognition
"2024-06-22T18:29:47Z"
--- datasets: - mozilla-foundation/common_voice_17_0 language: cs library_name: tensorflow license: cc-by-nc-sa-4.0 metrics: - wer - val_loss pipeline_tag: automatic-speech-recognition tags: - automatic-speech-recognition - speech - Tensorflow - Keras - RNN model-index: - name: KobraSpeech RNN Czech results: - task: type: speech-to-text dataset: name: mozilla-foundation/common_voice_17_0 type: common_voice split: test metrics: - type: wer value: '0.6982' --- # KobraSpeech RNN Czech This is a lightweight speech-to-text model for Czech language. It was trained on the Common Voice dataset. ## Training progress | Epoch | Loss | Val Loss | | --- | --- | --- | | 1 | 145.0826 | 101.9806 | | 2 | 88.5889 | 80.9404 | | 3 | 71.0080 | 72.7689 | | 4 | 61.9973 | 68.7629 | | 5 | 56.7657 | 60.8039 | | 6 | 51.5836 | 56.6200 | | 7 | 47.6242 | 58.4478 | | 8 | 44.3805 | 54.3059 | | 9 | 41.5582 | 49.7450 | | 10 | 39.1244 | 51.0741 | | 11 | 36.9500 | 46.6725 | | 12 | 35.0127 | 45.6165 | | 13 | 33.2974 | 47.7714 | | 14 | 31.6605 | 45.0911 | | 15 | 30.0918 | 43.3004 | | 16 | 28.8173 | 42.9870 | | 17 | 27.5169 | 42.2732 | | 18 | 26.3582 | 42.9355 | | 19 | 25.2368 | 42.0441 | | 20 | 24.2527 | 41.2783 | | 21 | 23.3302 | 40.5552 | | 22 | 22.3662 | 42.3867 | | 23 | 21.5657 | 41.0113 | | 24 | 20.7213 | 42.3488 | | 25 | 19.9843 | 41.7464 | | 26 | 22.3809 | 40.7493 | | 27 | 21.5943 | 40.4331 | | 28 | 20.6919 | 41.5385 | | 29 | 19.9768 | 41.5923 | | 30 | 19.2961 | 39.0283 | | 31 | 18.6037 | 40.4818 | | 32 | 17.9178 | 40.1532 | | 33 | 17.3384 | 40.9723 | | 34 | 16.7528 | 39.4724 | This model was created and trained by [Kobrasoft](https://kobrasoft.cz)
allyson-ai/website-object-detection-yolov10
allyson-ai
"2024-06-24T19:42:29Z"
0
0
null
[ "object-detection", "en", "license:apache-2.0", "region:us" ]
object-detection
"2024-06-22T18:30:19Z"
--- license: apache-2.0 language: - en pipeline_tag: object-detection ---
dbostain/example-model
dbostain
"2024-06-22T18:32:27Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-22T18:31:38Z"
--- license: mit --- First HF model
1112luke/bartolini
1112luke
"2024-06-22T18:35:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:35:52Z"
Entry not found
Xeltosh/SonicHasAutism
Xeltosh
"2024-06-22T19:17:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:40:41Z"
Entry not found
Polenov2024/Wendy_Pony_lora
Polenov2024
"2024-06-22T18:43:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:43:18Z"
Entry not found
Polenov2024/Mabel_Pony_lora
Polenov2024
"2024-06-22T18:44:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:44:06Z"
Entry not found
diepala/ppo-LunarLander-v2-unit8
diepala
"2024-06-22T18:48:53Z"
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2024-06-22T18:48:37Z"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -177.76 +/- 78.44 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'diepala/ppo-LunarLander-v2-unit8' 'batch_size': 512 'minibatch_size': 128} ```
sccengizlrn/donut-sciencedirect-header-parser-raw-5-epoch
sccengizlrn
"2024-06-22T18:50:05Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:50:05Z"
Entry not found
RamtinMoslemi/rl_course_vizdoom_health_gathering_supreme
RamtinMoslemi
"2024-06-22T18:52:43Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-06-22T18:52:34Z"
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.49 +/- 4.88 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r RamtinMoslemi/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
stafdif/Oblivion
stafdif
"2024-06-22T18:53:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:53:00Z"
Entry not found
ToeBoe/instasamka
ToeBoe
"2024-06-22T18:53:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:53:02Z"
Entry not found
Razer112/StarWarsTheory
Razer112
"2024-06-22T18:54:32Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-22T18:54:24Z"
--- license: openrail ---
Dexter7/Dexter
Dexter7
"2024-06-22T18:56:31Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T18:56:31Z"
Entry not found
AriaRahmati1/222ghesmat9part3
AriaRahmati1
"2024-06-22T19:08:44Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-22T18:56:31Z"
--- license: openrail ---
alexandrehsd/xlm-roberta-base-finetuned-panx-de
alexandrehsd
"2024-06-22T19:00:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:00:30Z"
Entry not found
Dandandooo/user-sim__gemma-1.1-2b-it__0_no_move
Dandandooo
"2024-06-22T19:04:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:04:44Z"
Entry not found
Mahmoud3899/xlm-roberta-base-finetuned-panx-all
Mahmoud3899
"2024-06-22T19:24:18Z"
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-06-22T19:09:13Z"
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1758 - F1: 0.8558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.299 | 1.0 | 835 | 0.2074 | 0.8078 | | 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 | | 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
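The card above does not show how to run the model, so a minimal inference sketch is included here; the repo id and the example sentence are assumptions, and the label set is taken to be the PAN-X/WikiANN one (PER, ORG, LOC) implied by the model name.

```python
# Minimal sketch, assuming the fine-tuned checkpoint is published under the
# repo id of this record (Mahmoud3899/xlm-roberta-base-finetuned-panx-all).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Mahmoud3899/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)

# XLM-R is multilingual, so non-English input is fine for a PAN-X model.
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```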
cminja/whisper-tiny-sr-hr-combined-8500
cminja
"2024-06-22T19:11:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:11:58Z"
Entry not found
LarryAIDraw/waiANINSFWPONYXL_v50
LarryAIDraw
"2024-06-22T19:30:11Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-22T19:12:50Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/404154/wai-ani-nsfw-ponyxl
ToeBoe/BoginyaVsegoWorld
ToeBoe
"2024-06-22T19:14:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:13:32Z"
Entry not found
AriaRahmati1/222ghesmat9part4
AriaRahmati1
"2024-06-22T19:24:45Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-22T19:15:34Z"
--- license: openrail ---
ToeBoe/Shuhuan
ToeBoe
"2024-06-22T19:17:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:16:18Z"
Entry not found
mahmoud669/face-celebs
mahmoud669
"2024-06-22T20:02:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:16:35Z"
Entry not found
Wouter01/mT5Ranking
Wouter01
"2024-06-25T07:09:43Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:16:42Z"
Entry not found
rishikaboinapally/AmazonLens
rishikaboinapally
"2024-06-22T19:20:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:20:51Z"
Entry not found
pookie3000/pg_chat_lora_v1
pookie3000
"2024-06-22T19:24:53Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:pookie3000/llama-3-8b-bnb-4bit-for-chat-training", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-22T19:22:07Z"
--- base_model: pookie3000/llama-3-8b-bnb-4bit-for-chat-training language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** pookie3000 - **License:** apache-2.0 - **Finetuned from model :** pookie3000/llama-3-8b-bnb-4bit-for-chat-training This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Polenov2024/Diane_Foxington_Pony_lora
Polenov2024
"2024-06-22T19:26:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:24:34Z"
Entry not found
Rohit115/gpt5
Rohit115
"2024-06-22T19:26:04Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:26:04Z"
Entry not found
zmiyao/jamal
zmiyao
"2024-06-22T19:26:16Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-22T19:26:16Z"
--- license: unknown ---
MohammadDallash/trajectory_smoother
MohammadDallash
"2024-06-22T19:30:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:26:31Z"
Entry not found
itay-nakash/model_e4ad58a464_sweep_valiant-lake-798
itay-nakash
"2024-06-22T19:30:37Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:30:37Z"
Entry not found
ibrahdiallo077/demo-onnx
ibrahdiallo077
"2024-06-22T19:30:39Z"
0
0
null
[ "region:us" ]
null
"2024-06-22T19:30:39Z"
Entry not found