Each row in the preview below describes one model repository on the Hugging Face Hub. Column schema:

| Column | Dtype | Range / distinct values |
| --- | --- | --- |
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0 – 738M |
| likes | int64 | 0 – 11k |
| library_name | string | 245 classes |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | — |
| card | string | lengths 1 – 901k |
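To work with rows like the ones below programmatically, the `datasets` library is the usual entry point. A minimal sketch; the Hub path `librarian-bots/model_cards_with_metadata` is an assumption, so substitute the actual path of this dataset:

```python
from datasets import load_dataset

# Hypothetical Hub path -- substitute the real location of this dataset.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

print(ds.features)       # column names and dtypes, matching the schema above
print(ds[0]["modelId"])  # repository id of the first row

# Keep only rows whose card field is more than a stub.
with_cards = ds.filter(lambda row: row["card"] not in (None, "", "Entry not found"))
print(len(with_cards))
```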
### ShiftAddLLM/opt66b-3bit-lat
- **author:** ShiftAddLLM
- **created:** 2024-06-14T04:19:14Z · **last modified:** 2024-06-14T04:24:27Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### EJosnin/trained-sd3
- **author:** EJosnin
- **created:** 2024-06-14T04:21:19Z · **last modified:** 2024-06-14T04:21:19Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### ShiftAddLLM/opt66b-2bit-lat
- **author:** ShiftAddLLM
- **created:** 2024-06-14T04:24:44Z · **last modified:** 2024-06-14T04:28:23Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### jianrong123/Qwen2-1.5B-Instruct-q4f16_1-MLC
- **author:** jianrong123
- **created:** 2024-06-14T04:26:24Z · **last modified:** 2024-06-14T04:26:24Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### arashm/CNN_protein-classification
- **author:** arashm
- **created:** 2024-06-14T04:30:17Z · **last modified:** 2024-06-14T04:43:00Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "en", "region:us" ]`
- **card:**

> ---
> language:
> - en
> metrics:
> - accuracy
> - f1
> - recall
> - precision
> ---
### sonicc/my_awesome_eli5_mlm_model
- **author:** sonicc
- **created:** 2024-06-14T04:31:53Z · **last modified:** 2024-06-17T23:31:57Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** fill-mask
- **tags:** `[ "transformers", "safetensors", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]`
- **card:** Entry not found
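Even though this row's card is missing, its metadata (`library_name: transformers`, `pipeline_tag: fill-mask`, a RoBERTa checkpoint in safetensors) already says how the model is meant to be queried. A minimal sketch using the standard `transformers` pipeline API; whether this particular checkpoint produces sensible completions is untested:

```python
from transformers import pipeline

# Repository id and task taken from the row above.
fill = pipeline("fill-mask", model="sonicc/my_awesome_eli5_mlm_model")

# RoBERTa-style tokenizers use <mask> as the mask token.
for candidate in fill("The quick brown fox <mask> over the lazy dog."):
    print(f"{candidate['token_str']!r}: {candidate['score']:.3f}")
```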
### ShiftAddLLM/opt6.7b-2bit-lat
- **author:** ShiftAddLLM
- **created:** 2024-06-14T04:33:38Z · **last modified:** 2024-06-14T04:34:12Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### ShiftAddLLM/opt6.7b-3bit-lat
- **author:** ShiftAddLLM
- **created:** 2024-06-14T04:35:01Z · **last modified:** 2024-06-14T04:35:43Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found
### Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_UltraFeedback-preference-standard-processed
- **author:** Nutanix
- **created:** 2024-06-14T04:35:36Z · **last modified:** 2024-06-14T11:56:37Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]`
- **card:** the default auto-generated 🤗 transformers model card template:

> ---
> library_name: transformers
> tags: []
> ---
>
> # Model Card for Model ID
> <!-- Provide a quick summary of what the model is/does. -->
>
> ## Model Details
> ### Model Description
> <!-- Provide a longer summary of what this model is. -->
> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
>
> - **Developed by:** [More Information Needed]
> - **Funded by [optional]:** [More Information Needed]
> - **Shared by [optional]:** [More Information Needed]
> - **Model type:** [More Information Needed]
> - **Language(s) (NLP):** [More Information Needed]
> - **License:** [More Information Needed]
> - **Finetuned from model [optional]:** [More Information Needed]
>
> ### Model Sources [optional]
> <!-- Provide the basic links for the model. -->
> - **Repository:** [More Information Needed]
> - **Paper [optional]:** [More Information Needed]
> - **Demo [optional]:** [More Information Needed]
>
> ## Uses
> <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
> ### Direct Use
> <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
> [More Information Needed]
> ### Downstream Use [optional]
> <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
> [More Information Needed]
> ### Out-of-Scope Use
> <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
> [More Information Needed]
>
> ## Bias, Risks, and Limitations
> <!-- This section is meant to convey both technical and sociotechnical limitations. -->
> [More Information Needed]
> ### Recommendations
> <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
>
> ## How to Get Started with the Model
> Use the code below to get started with the model.
> [More Information Needed]
>
> ## Training Details
> ### Training Data
> <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
> [More Information Needed]
> ### Training Procedure
> <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
> #### Preprocessing [optional]
> [More Information Needed]
> #### Training Hyperparameters
> - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
> #### Speeds, Sizes, Times [optional]
> <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
> [More Information Needed]
>
> ## Evaluation
> <!-- This section describes the evaluation protocols and provides the results. -->
> ### Testing Data, Factors & Metrics
> #### Testing Data
> <!-- This should link to a Dataset Card if possible. -->
> [More Information Needed]
> #### Factors
> <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
> [More Information Needed]
> #### Metrics
> <!-- These are the evaluation metrics being used, ideally with a description of why. -->
> [More Information Needed]
> ### Results
> [More Information Needed]
> #### Summary
>
> ## Model Examination [optional]
> <!-- Relevant interpretability work for the model goes here -->
> [More Information Needed]
>
> ## Environmental Impact
> <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
> - **Hardware Type:** [More Information Needed]
> - **Hours used:** [More Information Needed]
> - **Cloud Provider:** [More Information Needed]
> - **Compute Region:** [More Information Needed]
> - **Carbon Emitted:** [More Information Needed]
>
> ## Technical Specifications [optional]
> ### Model Architecture and Objective
> [More Information Needed]
> ### Compute Infrastructure
> [More Information Needed]
> #### Hardware
> [More Information Needed]
> #### Software
> [More Information Needed]
>
> ## Citation [optional]
> <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
> **BibTeX:**
> [More Information Needed]
> **APA:**
> [More Information Needed]
>
> ## Glossary [optional]
> <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
> [More Information Needed]
> ## More Information [optional]
> [More Information Needed]
> ## Model Card Authors [optional]
> [More Information Needed]
> ## Model Card Contact
> [More Information Needed]
### chainup244/Qwen-Qwen1.5-1.8B-1718339754
- **author:** chainup244
- **created:** 2024-06-14T04:35:55Z · **last modified:** 2024-06-14T04:35:55Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found
### kevinchen123/Qwen-Qwen1.5-0.5B-1718340200
- **author:** kevinchen123
- **created:** 2024-06-14T04:43:22Z · **last modified:** 2024-06-14T04:43:26Z
- **downloads:** 0 · **likes:** 0
- **library_name:** peft · **pipeline_tag:** null
- **tags:** `[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "region:us" ]`
- **card:** the same auto-generated template as the Nutanix card above, except that the YAML header reads `library_name: peft` / `base_model: Qwen/Qwen1.5-0.5B`, the "pushed on the Hub" sentence is absent, and the card ends with an extra section:

> ### Framework versions
> - PEFT 0.11.1
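The YAML header of this card names `peft` as the library and `Qwen/Qwen1.5-0.5B` as the base model, which is enough to reconstruct the usual adapter-loading pattern. A minimal sketch, assuming the repository contains a standard PEFT adapter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-0.5B"                             # card's base_model field
adapter_id = "kevinchen123/Qwen-Qwen1.5-0.5B-1718340200"  # the row's modelId

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the adapter weights on top of the (frozen) base model.
model = PeftModel.from_pretrained(base, adapter_id)
```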
### tctrautman/20240613-kibbe-training-base-merged-full
- **author:** tctrautman
- **created:** 2024-06-14T04:46:47Z · **last modified:** 2024-06-14T04:46:51Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]`
- **card:** the default auto-generated 🤗 transformers model card template, identical to the one shown in full under the Nutanix card above
### Amadeus99/topic-classification-tweet-6
- **author:** Amadeus99
- **created:** 2024-06-14T04:47:29Z · **last modified:** 2024-06-14T04:47:29Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found
### triplee/test_llama3-70b_lora_model
- **author:** triplee
- **created:** 2024-06-14T04:47:38Z · **last modified:** 2024-06-14T04:48:33Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-70b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]`
- **card:**

> ---
> language:
> - en
> license: apache-2.0
> tags:
> - text-generation-inference
> - transformers
> - unsloth
> - llama
> - trl
> base_model: unsloth/llama-3-70b-bnb-4bit
> ---
>
> # Uploaded model
>
> - **Developed by:** triplee
> - **License:** apache-2.0
> - **Finetuned from model :** unsloth/llama-3-70b-bnb-4bit
>
> This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
>
> [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
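This card and the near-identical `Waleed-MS/LLama_lora_model` card further down follow the same Unsloth recipe: a 4-bit bitsandbytes base model fine-tuned with Unsloth and TRL. A sketch of how such a checkpoint is typically loaded with Unsloth's `FastLanguageModel`; the exact arguments follow the Unsloth README and should be treated as assumptions:

```python
from unsloth import FastLanguageModel

# base_model from the card; load_in_4bit matches the "-bnb-4bit" naming.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-70b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path
```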
### SatyamSSJ10/jufsd
- **author:** SatyamSSJ10
- **created:** 2024-06-14T04:48:19Z · **last modified:** 2024-06-14T04:48:55Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### morganjiaming/model_name
- **author:** morganjiaming
- **created:** 2024-06-14T04:50:43Z · **last modified:** 2024-06-14T04:50:43Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found
### Waleed-MS/LLama_lora_model
- **author:** Waleed-MS
- **created:** 2024-06-14T04:52:55Z · **last modified:** 2024-06-14T04:53:10Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]`
- **card:** the same Unsloth "Uploaded model" card as `triplee/test_llama3-70b_lora_model` above, with **Developed by:** Waleed-MS and base model `unsloth/llama-3-8b-bnb-4bit`
### sujankumar15/Llama-2-7b-chat-finetune
- **author:** sujankumar15
- **created:** 2024-06-14T04:55:07Z · **last modified:** 2024-06-14T05:02:41Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** text-generation
- **tags:** `[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]`
- **card:** Entry not found

### ACEGameAI/Jeffrey-Savonen_ohwx-man_NEW
- **author:** ACEGameAI
- **created:** 2024-06-14T04:57:00Z · **last modified:** 2024-06-14T05:36:52Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### Bikram2055/llama-2-7b-edtech
- **author:** Bikram2055
- **created:** 2024-06-14T04:57:02Z · **last modified:** 2024-06-14T04:57:02Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found
### TTTXXX01/All_like48-zephyr-7b-sft-full
- **author:** TTTXXX01
- **created:** 2024-06-14T04:58:02Z · **last modified:** 2024-06-14T09:53:32Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** text-generation
- **tags:** `[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]`
- **card:**

> ---
> license: apache-2.0
> base_model: alignment-handbook/zephyr-7b-sft-full
> tags:
> - alignment-handbook
> - trl
> - dpo
> - generated_from_trainer
> - trl
> - dpo
> - generated_from_trainer
> datasets:
> - HuggingFaceH4/ultrafeedback_binarized
> model-index:
> - name: All_like48-zephyr-7b-sft-full
>   results: []
> ---
>
> <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
>
> # All_like48-zephyr-7b-sft-full
> This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
>
> ## Model description
> More information needed
> ## Intended uses & limitations
> More information needed
> ## Training and evaluation data
> More information needed
>
> ## Training procedure
> ### Training hyperparameters
> The following hyperparameters were used during training:
> - learning_rate: 5e-07
> - train_batch_size: 4
> - eval_batch_size: 4
> - seed: 42
> - distributed_type: multi-GPU
> - num_devices: 4
> - gradient_accumulation_steps: 2
> - total_train_batch_size: 32
> - total_eval_batch_size: 16
> - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
> - lr_scheduler_type: cosine
> - lr_scheduler_warmup_ratio: 0.1
> - num_epochs: 1
>
> ### Training results
>
> ### Framework versions
> - Transformers 4.41.2
> - Pytorch 2.3.0+cu121
> - Datasets 2.19.1
> - Tokenizers 0.19.1
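One detail of the hyperparameter list worth spelling out: `total_train_batch_size: 32` is not an independent setting but the product of the per-device batch size, the gradient-accumulation steps, and the device count, while the eval analogue has no accumulation term:

```python
train_batch_size = 4             # per device
gradient_accumulation_steps = 2
num_devices = 4

assert train_batch_size * gradient_accumulation_steps * num_devices == 32  # total_train_batch_size
assert 4 * num_devices == 16                                               # total_eval_batch_size
```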
### shuisman/xlm-roberta-base-dutch-cola
- **author:** shuisman
- **created:** 2024-06-14T04:58:44Z · **last modified:** 2024-06-14T18:18:45Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:**

> # xlm-roberta-base-dutch-cola
> This model is a fine-tuned version of [XLM-RoBERTa-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on [Dutch CoLA](https://huggingface.co/datasets/GroNLP/dutch-cola). It achieves the following results on the evaluation set:
> - Loss: 0.6558
> - Accuracy: 0.6708
> - MCC: 0.3523
>
> ### Training hyperparameters
> The following hyperparameters were used during training:
> - learning_rate: 4e-05
> - train_batch_size: 16
> - eval_batch_size: 16
> - seed: 42
> - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
> - lr_scheduler_type: linear
> - num_epochs: 6
> - EarlyStopping with patience 2
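The card reports MCC alongside accuracy, the conventional headline metric for CoLA-style acceptability tasks because it stays informative under class imbalance. For reference, a minimal sketch of computing both with scikit-learn; the label arrays are illustrative only:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Illustrative gold labels and predictions (1 = acceptable, 0 = not).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.4f}")
print(f"MCC:      {matthews_corrcoef(y_true, y_pred):.4f}")
```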
### ib-eugeneroh/gib_demo
- **author:** ib-eugeneroh
- **created:** 2024-06-14T05:00:58Z · **last modified:** 2024-06-14T05:02:26Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### nrohan988/AI-Styler
- **author:** nrohan988
- **created:** 2024-06-14T05:02:58Z · **last modified:** 2024-06-14T05:05:52Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### belyakoff/DistilBERT-token-classification
- **author:** belyakoff
- **created:** 2024-06-14T05:04:15Z · **last modified:** 2024-06-14T05:04:15Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### NotoriousH2/240614
- **author:** NotoriousH2
- **created:** 2024-06-14T05:04:42Z · **last modified:** 2024-06-14T05:05:16Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]`
- **card:** the default auto-generated 🤗 transformers model card template, identical to the one shown in full under the Nutanix card above
### solgit/240614
- **author:** solgit
- **created:** 2024-06-14T05:05:29Z · **last modified:** 2024-06-14T05:27:05Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]`
- **card:** the default auto-generated 🤗 transformers model card template, identical to the one shown in full under the Nutanix card above
### Abinj650/Audio
- **author:** Abinj650
- **created:** 2024-06-14T05:07:19Z · **last modified:** 2024-06-14T05:07:19Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### NotoriousH2/240614_v2
- **author:** NotoriousH2
- **created:** 2024-06-14T05:08:02Z · **last modified:** 2024-06-14T05:08:02Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### GraydientPlatformAPI/loras-june14
- **author:** GraydientPlatformAPI
- **created:** 2024-06-14T05:08:49Z · **last modified:** 2024-06-14T05:39:38Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### Cristian9481/xgboost-pipeline-model
- **author:** Cristian9481
- **created:** 2024-06-14T05:09:14Z · **last modified:** 2024-06-14T06:20:03Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### nasser1/ggjgjg
- **author:** nasser1
- **created:** 2024-06-14T05:10:48Z · **last modified:** 2024-06-14T05:10:48Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### yyqoni/debug_outputs
- **author:** yyqoni
- **created:** 2024-06-14T05:11:36Z · **last modified:** 2024-06-14T05:11:36Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### EmilioMalagon-TEC/finetuning-sentiment-model-amazon-baby-5000
- **author:** EmilioMalagon-TEC
- **created:** 2024-06-14T05:12:41Z · **last modified:** 2024-06-14T05:12:41Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### Alectoris/grrg
- **author:** Alectoris
- **created:** 2024-06-14T05:13:41Z · **last modified:** 2024-06-14T05:13:42Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### IrohXu/trained-sd3
- **author:** IrohXu
- **created:** 2024-06-14T05:15:54Z · **last modified:** 2024-06-14T05:15:54Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### manbeast3b/ZZZZZZZZZZZtest17
- **author:** manbeast3b
- **created:** 2024-06-14T05:17:15Z · **last modified:** 2024-06-14T05:19:31Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** text-generation
- **tags:** `[ "transformers", "safetensors", "stablelm", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]`
- **card:** Entry not found
### ldhldh/lll
- **author:** ldhldh
- **created:** 2024-06-14T05:20:25Z · **last modified:** 2024-06-14T05:21:21Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** null
- **tags:** `[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]`
- **card:** the default auto-generated 🤗 transformers model card template, identical to the one shown in full under the Nutanix card above
### Lennard-Heuer/Llama3-FT_V1
- **author:** Lennard-Heuer
- **created:** 2024-06-14T05:21:04Z · **last modified:** 2024-06-14T05:30:27Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** text-generation
- **tags:** `[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]`
- **card:** the default auto-generated 🤗 transformers model card template, identical to the one shown in full under the Nutanix card above
### duttasantanu/opt-125m-quantized-dlai
- **author:** duttasantanu
- **created:** 2024-06-14T05:22:41Z · **last modified:** 2024-06-14T05:23:32Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### Rxplore/test_bug
- **author:** Rxplore
- **created:** 2024-06-14T05:23:49Z · **last modified:** 2024-06-14T05:23:49Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### aiqink/agenthub
- **author:** aiqink
- **created:** 2024-06-14T05:27:00Z · **last modified:** 2024-06-14T05:27:00Z
- **downloads:** 0 · **likes:** 1
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "license:apache-2.0", "region:us" ]`
- **card:**

> ---
> license: apache-2.0
> ---
### solgit/240614_w_basemodel
- **author:** solgit
- **created:** 2024-06-14T05:30:12Z · **last modified:** 2024-06-14T05:33:55Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** text-generation
- **tags:** `[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]`
- **card:** the same auto-generated template as the Nutanix card above, except that the YAML header lists `tags:` `- trl` `- sft` instead of empty tags
### kevinchen123/Qwen-Qwen1.5-1.8B-1718343061
- **author:** kevinchen123
- **created:** 2024-06-14T05:31:02Z · **last modified:** 2024-06-14T05:31:08Z
- **downloads:** 0 · **likes:** 0
- **library_name:** peft · **pipeline_tag:** null
- **tags:** `[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "region:us" ]`
- **card:** the same auto-generated PEFT card as `kevinchen123/Qwen-Qwen1.5-0.5B-1718340200` above, with `base_model: Qwen/Qwen1.5-1.8B` (Framework versions: PEFT 0.11.1)
### Hhhhtt/flutter
- **author:** Hhhhtt
- **created:** 2024-06-14T05:34:52Z · **last modified:** 2024-06-14T05:37:09Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found

### holma91/honeycomb
- **author:** holma91
- **created:** 2024-06-14T05:36:54Z · **last modified:** 2024-06-14T07:52:30Z
- **downloads:** 0 · **likes:** 0
- **library_name:** transformers · **pipeline_tag:** text-generation
- **tags:** `[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]`
- **card:** Entry not found

### ncabrera97/KmPssble
- **author:** ncabrera97
- **created:** 2024-06-14T05:37:14Z · **last modified:** 2024-06-14T05:38:25Z
- **downloads:** 0 · **likes:** 0
- **library_name:** null · **pipeline_tag:** null
- **tags:** `[ "region:us" ]`
- **card:** Entry not found
ShapeKapseln33/Levitox2W
ShapeKapseln33
"2024-06-14T05:39:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T05:38:11Z"
Levitox Reviews Benefits & Intake Levitox is a dietary supplement that is marketed primarily for supporting liver health and aiding in weight management. It is formulated with a blend of natural ingredients intended to promote liver function and assist in metabolic processes. The supplement claims to offer benefits in detoxifying the body and possibly influencing weight control. As with any dietary supplement, it's important to consult with a healthcare professional for advice on its suitability and effectiveness for individual health needs and conditions. For detailed information, make sure to read the whole article; it's best to refer to the product's official website. **[Click here to buy now from official website of Levitox](https://capsules24x7.com/levitox-us)** Welcome to the ultimate guide on Levitox - the groundbreaking supplement that's taking the health and wellness world by storm! If you're curious about how Levitox can revolutionize your well-being, you've come to the right place. In this blog post, we'll delve into its working mechanism, key ingredients, benefits, pros and cons, as well as real consumer reviews. Get ready to discover a whole new way of supporting your health with Levitox! Curious about how Levitox works its magic in the body? This innovative supplement targets stubborn fat cells, helping to boost metabolism and support healthy weight management. By tackling inflammation and oxidative stress, Levitox promotes overall wellness from within. Key ingredients like turmeric, ginger, and black pepper extract work synergistically to provide powerful antioxidant properties while supporting digestive health. With a blend of natural ingredients carefully selected for their efficacy, Levitox offers a holistic approach to improving your health. Discover the numerous benefits of incorporating Levitox into your daily routine – from increased energy levels and enhanced cognitive function to better digestion and improved immunity. However, like any product, it's essential to weigh the pros and cons before deciding if Levitox is right for you. ## Introduction to Levitox Are you on a quest for a natural way to support your weight management goals? Look no further than Levitox. This cutting-edge supplement is designed to help you achieve a healthier lifestyle by targeting key areas of your body that can impact weight loss. Levitox contains a powerful blend of ingredients carefully selected to work synergistically in supporting metabolism, curbing cravings, and promoting energy levels. With its unique formula, Levitox aims to provide comprehensive support for individuals looking to enhance their weight management journey without the use of harsh chemicals or stimulants. Whether you're just starting your wellness journey or looking for an effective addition to your current routine, Levitox may be the solution you've been searching for. ## Explore how Levitox works in the body Levitox is a natural dietary supplement designed to support healthy weight management and overall well-being. When you take Levitox, its powerful blend of ingredients works synergistically to target stubborn fat cells in the body. The formula helps boost metabolism, allowing your body to burn fat more efficiently while providing a steady stream of energy throughout the day. 
**[Click here to buy now from official website of Levitox](https://capsules24x7.com/levitox-us)** One key aspect of how Levitox works is promoting thermogenesis, which is the process of generating heat within the body to help burn calories. This means that by taking Levitox regularly, you may experience increased calorie expenditure, leading to potential weight loss results over time. Additionally, Levitox contains antioxidants that can help reduce inflammation and support better digestion for improved nutrient absorption. Understanding how Levitox functions in the body sheds light on its potential benefits for those looking to enhance their weight loss journey naturally and effectively. ## Detailed explanation of the working process and effects Levitox works by targeting the root cause of weight gain – inflammation. The powerful blend of natural ingredients in Levitox helps to reduce inflammation in the body, especially in fat cells. By doing so, it supports the body's ability to metabolize fats more efficiently and promotes overall weight loss. Additionally, Levitox contains ingredients that can help regulate blood sugar levels and curb cravings for unhealthy foods. This dual-action approach not only aids in shedding excess pounds but also promotes better control over eating habits. Users often report feeling a boost in energy and improved mood due to the positive effects of Levitox on their metabolism and overall well-being. As inflammation decreases and metabolic processes improve, users may experience enhanced digestion, reduced bloating, and increased feelings of satiety after meals. These combined effects contribute to a holistic approach to weight management with Levitox. ## Key Ingredients in Levitox Levitox is packed with powerful natural ingredients that work synergistically to support overall health and well-being. One key ingredient found in Levitox is African Mango Extract, known for its ability to promote weight loss by boosting metabolism and reducing appetite. Another essential component is Grape Seed Extract, rich in antioxidants that help protect cells from damage caused by free radicals. Additionally, Levitox contains Green Tea Extract, which has been shown to increase fat burning and improve physical performance. These carefully selected ingredients are formulated to provide maximum benefits for those looking to enhance their weight management journey. By incorporating these potent elements into your daily routine, Levitox aims to support a healthy lifestyle and aid in achieving your wellness goals. **[Click here to buy now from official website of Levitox](https://capsules24x7.com/levitox-us)**
VKapseln475/Flexafe
VKapseln475
"2024-06-14T05:39:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T05:38:53Z"
# Flexafen Reviews United States Experiences – Flexafen Intake, Ingredients Official Price, Buy Flexafen Reviews United States Experiences Flexafen is a revolutionary new-found formulation that supports joint health. Flexafen is a powerful mix of natural ingredients that can support occasional joint discomfort and cartilage health naturally. ## **[Click Here To Buy Now From Official Website of Flexafen](https://capsules24x7.com/flexafen-us)** ## For a thorough analysis of the ingredients, be sure to read the personalized review on Flexafen. ### Potential Side Effects Of Flexafen Flexafen supports joint health and offers temporary relief for discomfort and aches. Users need to know about potential side effects. While the natural ingredients are usually well-tolerated, reactions can vary based on individual sensitivities and health conditions. ### MSM (Methylsulfonylmethane) MSM is generally considered safe, but some users may experience mild gastrointestinal upset, allergic reactions, or skin rashes. It's advisable for individuals with a known allergy to sulfur to proceed with caution. ### Hyaluronic Acid (HA) HA is also well-tolerated, though some people might encounter joint pain and swelling, particularly when taken in high doses. It's rare, but worth noting for individuals prone to allergic reactions. ### AprèsFlex Boswellia Serrata Extract While Boswellia is praised for its anti-inflammatory properties, it can cause stomach pain, nausea, and diarrhea in sensitive individuals. People with certain autoimmune diseases should use it cautiously. ### White Willow Bark Similar to aspirin, White Willow Bark may cause stomach ulcers, bleeding, or allergic reactions in individuals sensitive to salicylates. It's recommended to avoid using this if you have aspirin intolerance. ### Collavant n2 Type 2 Collagen Type 2 Collagen is usually safe but can occasionally lead to gastrointestinal side effects or allergic reactions, particularly in individuals with a history of food allergies. ### Boron Though an essential mineral, excessive boron intake can lead to skin inflammation, irritability, tremors, or digestive issues. Users need to consult a healthcare provider before beginning a new supplement routine, especially if they're on medication, to avoid any potential interactions or side effects. ## Advantages Of Flexafen ### Enhanced Joint Flexibility and Mobility Flexafen's advanced formula boosts collagen and synovial fluid, vital for joint health. Enhancing mobility and flexibility enables users to move more freely and comfortably. ### Temporary Relief for Occasional Aches and Discomfort The specific nutrients in Flexafen, like MSM and White Willow Bark, are known to provide short-term relief from occasional aches and discomfort, helping you lead a more comfortable and active life. ## **[Click Here To Buy Now From Official Website of Flexafen](https://capsules24x7.com/flexafen-us)**
ADT109119/llama3-8b-Instruct-int8.flm
ADT109119
"2024-06-14T06:36:29Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-14T05:44:20Z"
--- license: apache-2.0 --- fastllm model for llama3-8b-Instruct-int8 GitHub address: https://github.com/ztxz16/fastllm built by [The Walking Fish步行魚](https://www.youtube.com/@the_walking_fish)
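Since the card gives no loading instructions, a minimal inference sketch may help. It assumes the `fastllm_pytools` Python binding from the linked repository; API names may differ by version, and the local `.flm` filename is illustrative:

```python
# Minimal sketch, assuming the fastllm_pytools binding from
# https://github.com/ztxz16/fastllm; names may differ by version.
from fastllm_pytools import llm

# Hypothetical local path: the actual .flm filename in this repo may differ.
model = llm.model("llama3-8b-Instruct-int8.flm")
print(model.response("Hello, who are you?"))
```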
manbeast3b/ZZZZZZZZZZZtest25
manbeast3b
"2024-06-14T05:47:11Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-14T05:45:09Z"
Entry not found
PavanNaik111/gpt2-suggestions-str8bat
PavanNaik111
"2024-06-14T05:47:50Z"
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-14T05:47:28Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dongle94/gen
dongle94
"2024-06-14T05:50:02Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-14T05:49:16Z"
--- license: mit ---
twstella/llama-3-Korean-Bllossom-q4f16-MLC
twstella
"2024-06-14T07:35:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T05:51:42Z"
Entry not found
henhenhahi111112/henhenhahimodel-7b
henhenhahi111112
"2024-06-19T10:02:13Z"
0
0
null
[ "tensorboard", "region:us" ]
null
"2024-06-14T05:55:26Z"
# OpenLM ## Environment ```shell cd henhenhahimodel pip install -r requirements.txt ``` ## Usage ```shell python3 scripts/generate.py --model open_lm_7b --checkpoint logs/test_alpaca_7b_1p25_240612/checkpoints/epoch_1.pt --positional-embedding-type rotary --input-text '{"instruction":"Using the provided data, what is the most common pet in this household?","input":"The household has 3 cats, 2 dogs, and 1 rabbit."}' ```
HeavyDriver/DollLike
HeavyDriver
"2024-06-14T05:56:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T05:56:10Z"
Entry not found
HeavyDriver/BreastHelper
HeavyDriver
"2024-06-14T06:00:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T05:59:39Z"
Entry not found
vishruthnath/codellama_7b_nl2code_ft
vishruthnath
"2024-06-14T06:53:26Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-14T06:00:18Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ErnestBeckham/SC-ResViT
ErnestBeckham
"2024-06-19T19:53:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:02:25Z"
Entry not found
kevinchen123/Qwen-Qwen1.5-0.5B-1718344994
kevinchen123
"2024-06-14T06:03:18Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-14T06:03:15Z"
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
HyperdustProtocol/ImHyperAGI-llama2-7b-977
HyperdustProtocol
"2024-06-14T06:06:37Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-14T06:06:28Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-2-7b-bnb-4bit --- # Uploaded model - **Developed by:** HyperdustProtocol - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
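For readers unsure how to run an Unsloth-finetuned checkpoint, a hedged loading sketch follows. It assumes the repo loads directly with Unsloth's `FastLanguageModel`; the `max_seq_length` value is an assumption not stated on the card:

```python
# Sketch only: loading an Unsloth-finetuned checkpoint for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HyperdustProtocol/ImHyperAGI-llama2-7b-977",
    max_seq_length=2048,  # assumed; not stated on the card
    load_in_4bit=True,    # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable fast inference mode

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```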
Moo/mmo
Moo
"2024-06-14T06:07:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:07:21Z"
Entry not found
EmerySchaefer/EmerySchaefer
EmerySchaefer
"2024-06-14T06:09:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:09:49Z"
Entry not found
mobileimagesnz/themaster_model
mobileimagesnz
"2024-06-14T06:23:46Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:13:57Z"
Entry not found
ishant0121/zephyr-7b-sft-full
ishant0121
"2024-07-02T10:53:32Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:TinyLlama/TinyLlama-1.1B-step-50K-105b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-14T06:14:45Z"
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-step-50K-105b tags: - alignment-handbook - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: zephyr-7b-sft-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-sft-full This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-step-50K-105b](https://huggingface.co/TinyLlama/TinyLlama-1.1B-step-50K-105b) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 1.3203 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3518 | 1.0 | 3653 | 1.3203 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
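The hyperparameter list above maps directly onto a `transformers.TrainingArguments` object. The following is a reconstruction under stated assumptions, not the authors' actual training script; the 4-GPU distributed setup (total batch size 16) and the `output_dir` are not reproduced:

```python
# Reconstruction of the listed hyperparameters as TrainingArguments;
# a sketch, not the authors' actual script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="zephyr-7b-sft-full",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```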
mobileimagesnz/myBH
mobileimagesnz
"2024-06-14T06:19:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:18:56Z"
Entry not found
vishal0524/example-models
vishal0524
"2024-06-14T06:24:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:22:48Z"
# Initial Phase model example model: --- license: mit ---
r0in/distilbert-base-uncased-finetuned-imdb
r0in
"2024-06-17T05:21:23Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-06-14T06:24:04Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6819 | 1.0 | 157 | 2.4978 | | 2.5872 | 2.0 | 314 | 2.4488 | | 2.527 | 3.0 | 471 | 2.4823 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
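A minimal inference sketch for this fill-mask checkpoint, using the repo id from this card; the example sentence is illustrative:

```python
# Minimal sketch: querying the fill-mask head of this checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="r0in/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```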
mohammad1997/MILG11.01.01
mohammad1997
"2024-06-14T06:30:20Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-14T06:27:51Z"
--- license: openrail ---
dongkyun77/wav2vec2-base-timit-demo-colab
dongkyun77
"2024-06-14T06:28:41Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-14T06:28:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mi-rei/CT_clL_5e_per_phase
mi-rei
"2024-06-14T07:35:21Z"
0
0
transformers
[ "transformers", "safetensors", "longformer", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-14T06:34:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HealTether-Healthcare/llama3-8b-lora-finetuned-v1
HealTether-Healthcare
"2024-06-14T06:36:00Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-14T06:35:35Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** HealTether-Healthcare - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ssnsn/pretrain333
ssnsn
"2024-06-14T06:36:29Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-14T06:36:02Z"
--- license: openrail ---
mephistoxfaust/toya
mephistoxfaust
"2024-06-14T06:37:41Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-14T06:36:46Z"
--- license: unknown ---
schoonhovenra/20240530
schoonhovenra
"2024-06-14T06:39:43Z"
0
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
"2024-06-14T06:39:33Z"
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: '20240530' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20240530 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:------:|:---------------:| | 1.8627 | 18.35 | 4000 | 1.5604 | | 1.4599 | 36.7 | 8000 | 1.1805 | | 1.2256 | 55.05 | 12000 | 0.9678 | | 1.1121 | 73.39 | 16000 | 0.8867 | | 1.0312 | 91.74 | 20000 | 0.8539 | | 1.016 | 110.09 | 24000 | 0.8169 | | 0.9564 | 128.44 | 28000 | 0.8027 | | 0.9438 | 146.79 | 32000 | 0.7773 | | 0.9099 | 165.14 | 36000 | 0.7705 | | 0.8781 | 183.49 | 40000 | 0.7570 | | 0.8743 | 201.83 | 44000 | 0.7558 | | 0.8581 | 220.18 | 48000 | 0.7424 | | 0.8447 | 238.53 | 52000 | 0.7356 | | 0.8207 | 256.88 | 56000 | 0.7324 | | 0.8018 | 275.23 | 60000 | 0.7266 | | 0.793 | 293.58 | 64000 | 0.7279 | | 0.7987 | 311.93 | 68000 | 0.7250 | | 0.7643 | 330.28 | 72000 | 0.7245 | | 0.7673 | 348.62 | 76000 | 0.7297 | | 0.7509 | 366.97 | 80000 | 0.7169 | | 0.758 | 385.32 | 84000 | 0.7202 | | 0.7355 | 403.67 | 88000 | 0.7180 | | 0.738 | 422.02 | 92000 | 0.7202 | | 0.7296 | 440.37 | 96000 | 0.7229 | | 0.7107 | 458.72 | 100000 | 0.7164 | | 0.6961 | 477.06 | 104000 | 0.7161 | | 0.7096 | 495.41 | 108000 | 0.7156 | | 0.6837 | 513.76 | 112000 | 0.7145 | | 0.7034 | 532.11 | 116000 | 0.7147 | | 0.6868 | 550.46 | 120000 | 0.7201 | | 0.6814 | 568.81 | 124000 | 0.7164 | | 0.6896 | 587.16 | 128000 | 0.7167 | | 0.6809 | 605.5 | 132000 | 0.7149 | | 0.6583 | 623.85 | 136000 | 0.7196 | | 0.6696 | 642.2 | 140000 | 0.7185 | | 0.6704 | 660.55 | 144000 | 0.7156 | | 0.6761 | 678.9 | 148000 | 0.7235 | | 0.6577 | 697.25 | 152000 | 0.7207 | | 0.6649 | 715.6 | 156000 | 0.7211 | | 0.6589 | 733.94 | 160000 | 0.7203 | | 0.6461 | 752.29 | 164000 | 0.7190 | | 0.6406 | 770.64 | 168000 | 0.7213 | | 0.638 | 788.99 | 172000 | 0.7191 | | 0.6523 | 807.34 | 176000 | 0.7232 | | 0.6336 | 825.69 | 180000 | 0.7177 | | 0.6382 | 844.04 | 184000 | 0.7199 | | 0.6394 | 862.39 | 188000 | 0.7241 | | 0.6406 | 880.73 | 192000 | 0.7239 | | 0.6366 | 899.08 | 196000 | 0.7226 | | 0.65 | 917.43 | 200000 | 0.7198 | | 0.6382 | 935.78 | 204000 | 0.7198 | | 0.6257 | 954.13 | 208000 | 0.7241 | | 0.6242 | 972.48 | 212000 | 0.7211 | | 0.6405 | 990.83 | 216000 | 0.7211 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.0 - Datasets 2.12.0 - Tokenizers 0.15.1
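A hedged inference sketch for this fine-tuned DETR checkpoint; it assumes the repo ships the usual image-processor config, and the image path and the 0.5 score threshold are illustrative:

```python
# Sketch only: running detection with this fine-tuned DETR checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "schoonhovenra/20240530"
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) detections.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```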
Cyber3ra/SecAI-Llama-2-40
Cyber3ra
"2024-06-14T06:55:31Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-14T06:41:00Z"
Entry not found
kataragi/controlnetXL-rough-coating
kataragi
"2024-06-14T07:06:04Z"
0
13
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-14T06:42:01Z"
--- license: creativeml-openrail-m --- # controlnetXL-rough-coating - This is a ControlNet for Stable Diffusion XL that performs coloring using a rough-painted image as reference. No preprocessor is used. # How to use Set a rough-painted image that includes the line art as the ControlNet input, and set the preprocessor to none. The training model is animagineXL3.1; it also works with ebara_pony2.1 and similar checkpoints. It can be guided by a fully rough-painted image, and it appears to work with partially painted ones as well. - ![](test1.png) - ![](test2.png) Example reference settings: a weight of 0.5 and an end_step of around 0.5 seem to work well, but no setting gives precise, deterministic control. - ![](test3.png)
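A sketch of wiring this ControlNet into an SDXL pipeline with diffusers follows. Assumptions: the repo loads as a diffusers-format `ControlNetModel`, the base checkpoint repo id is a guess at the AnimagineXL 3.1 model named on the card, and the 0.5 values mirror the card's suggested weight and end_step:

```python
# Sketch only: using this ControlNet with an SDXL pipeline in diffusers.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "kataragi/controlnetXL-rough-coating", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",  # assumed repo id for animagineXL3.1
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

rough = load_image("rough_with_lineart.png")  # no preprocessor, per the card
image = pipe(
    "1girl, clean cel coloring",        # illustrative prompt
    image=rough,
    controlnet_conditioning_scale=0.5,  # "weight" of about 0.5
    control_guidance_end=0.5,           # "end_step" of about 0.5
).images[0]
image.save("colored.png")
```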
chohtet/qwen2_7b_instruct_lora_r128
chohtet
"2024-06-14T06:43:14Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-14T06:42:30Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** chohtet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-7B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
habibi26/ktp-not-ktp-clip
habibi26
"2024-06-14T08:08:05Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "clip", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:openai/clip-vit-base-patch32", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-14T06:43:12Z"
--- base_model: openai/clip-vit-base-patch32 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: ktp-not-ktp-clip results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ktp-not-ktp-clip This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0074 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 3 | 0.5222 | 0.6733 | | No log | 1.8462 | 6 | 0.2392 | 0.9208 | | No log | 2.7692 | 9 | 0.1027 | 0.9703 | | 0.4711 | 4.0 | 13 | 0.2471 | 0.8911 | | 0.4711 | 4.9231 | 16 | 0.0559 | 0.9901 | | 0.4711 | 5.8462 | 19 | 0.0441 | 0.9901 | | 0.1979 | 6.7692 | 22 | 0.0818 | 0.9802 | | 0.1979 | 8.0 | 26 | 0.0772 | 0.9802 | | 0.1979 | 8.9231 | 29 | 0.1827 | 0.9703 | | 0.1414 | 9.8462 | 32 | 0.0894 | 0.9802 | | 0.1414 | 10.7692 | 35 | 0.0551 | 0.9802 | | 0.1414 | 12.0 | 39 | 0.0125 | 1.0 | | 0.0699 | 12.9231 | 42 | 0.0119 | 1.0 | | 0.0699 | 13.8462 | 45 | 0.0074 | 1.0 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
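A minimal usage sketch, assuming this checkpoint loads through the `image-classification` pipeline; the input image path is illustrative:

```python
# Sketch: classifying an image with this fine-tuned CLIP checkpoint.
from transformers import pipeline

clf = pipeline("image-classification", model="habibi26/ktp-not-ktp-clip")
print(clf("document_photo.jpg"))  # hypothetical input image
```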
chohtet/qwen2_7b_instruct_4bit_r128
chohtet
"2024-06-14T06:45:44Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-14T06:43:32Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** chohtet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-7B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
YeBhoneLin10/Simbolo_ChatBot
YeBhoneLin10
"2024-06-14T06:44:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:44:06Z"
Entry not found
onizukal/Boya1_3Class_SGD_1e4_20Epoch_Beit-large-224_fold4
onizukal
"2024-06-14T17:44:38Z"
0
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-14T06:44:19Z"
--- license: apache-2.0 base_model: microsoft/beit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: Boya1_3Class_SGD_1e4_20Epoch_Beit-large-224_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.5666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_3Class_SGD_1e4_20Epoch_Beit-large-224_fold4 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0791 - Accuracy: 0.5667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1102 | 1.0 | 923 | 1.1585 | 0.5447 | | 1.0847 | 2.0 | 1846 | 1.1164 | 0.5585 | | 1.0858 | 3.0 | 2769 | 1.0944 | 0.5634 | | 1.1026 | 4.0 | 3692 | 1.0826 | 0.5650 | | 1.1357 | 5.0 | 4615 | 1.0791 | 0.5667 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
onizukal/Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold3
onizukal
"2024-06-14T17:40:37Z"
0
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-14T06:44:21Z"
--- license: apache-2.0 base_model: microsoft/beit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8553936450111314 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold3 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7383 - Accuracy: 0.8554 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4248 | 1.0 | 2467 | 0.4024 | 0.8380 | | 0.3093 | 2.0 | 4934 | 0.3847 | 0.8552 | | 0.1192 | 3.0 | 7401 | 0.5222 | 0.8533 | | 0.1199 | 4.0 | 9868 | 0.6854 | 0.8465 | | 0.1174 | 5.0 | 12335 | 0.9930 | 0.8524 | | 0.0001 | 6.0 | 14802 | 1.3492 | 0.8527 | | 0.0001 | 7.0 | 17269 | 1.4598 | 0.8496 | | 0.0667 | 8.0 | 19736 | 1.6952 | 0.8483 | | 0.0022 | 9.0 | 22203 | 1.6924 | 0.8546 | | 0.0175 | 10.0 | 24670 | 1.7383 | 0.8554 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
sushk0317/my_awesome_model
sushk0317
"2024-06-14T06:44:23Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:44:23Z"
Entry not found
chainup244/Qwen-Qwen1.5-0.5B-1718347480
chainup244
"2024-06-14T06:44:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:44:41Z"
Entry not found
ChhayaMehar/Devi
ChhayaMehar
"2024-06-14T06:45:19Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:45:19Z"
Entry not found
chainup244/Qwen-Qwen1.5-1.8B-1718347565
chainup244
"2024-06-14T06:46:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:46:07Z"
Entry not found
derek33125/project_angel_GLM4
derek33125
"2024-06-14T06:46:50Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-14T06:46:50Z"
---
license: apache-2.0
---
freyza/ichiusa
freyza
"2024-07-02T01:05:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:46:52Z"
Entry not found
onizukal/Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold3
onizukal
"2024-06-14T17:41:28Z"
0
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-14T06:50:30Z"
---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold3
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8452742359846185
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold3

This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6691
- Accuracy: 0.8453

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4597        | 1.0   | 2467  | 0.3981          | 0.8379   |
| 0.3536        | 2.0   | 4934  | 0.3996          | 0.8368   |
| 0.1795        | 3.0   | 7401  | 0.4872          | 0.8467   |
| 0.1625        | 4.0   | 9868  | 0.6122          | 0.8475   |
| 0.1107        | 5.0   | 12335 | 0.9789          | 0.8460   |
| 0.0003        | 6.0   | 14802 | 1.0818          | 0.8494   |
| 0.0149        | 7.0   | 17269 | 1.4834          | 0.8465   |
| 0.0           | 8.0   | 19736 | 1.5090          | 0.8474   |
| 0.0           | 9.0   | 22203 | 1.5763          | 0.8462   |
| 0.001         | 10.0  | 24670 | 1.6691          | 0.8453   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
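For direct (non-pipeline) loading, a hedged sketch using this record's model id follows; the card itself does not confirm that inference-ready weights are published there, and `example.jpg` is an assumed local file:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "onizukal/Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold3"  # this record's id
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # assumed local image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```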
chainup244/Qwen-Qwen1.5-7B-1718347831
chainup244
"2024-06-14T06:50:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:50:36Z"
Entry not found
lone682/sd3
lone682
"2024-06-14T07:53:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:50:42Z"
Entry not found
Mineru/result-gemma-2b-it
Mineru
"2024-06-16T07:28:22Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-14T06:51:13Z"
Entry not found
ssnsn/pretrain3332
ssnsn
"2024-06-14T06:51:45Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-14T06:51:14Z"
---
license: openrail
---
gavincyi/tmp_trainer
gavincyi
"2024-06-14T06:53:16Z"
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:facebook/opt-350m", "license:other", "region:us" ]
null
"2024-06-14T06:53:05Z"
---
license: other
base_model: facebook/opt-350m
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: tmp_trainer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tmp_trainer

This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
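A minimal sketch of the TRL SFT recipe this card describes is given below. The card does not name the text corpus behind its "generator" dataset, so a two-row toy dataset stands in for it, and the output directory is assumed; exact `SFTTrainer` keyword names vary somewhat across trl releases:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import SFTTrainer

# Placeholder dataset: substitute the real corpus used for "generator".
train_dataset = Dataset.from_dict({"text": ["Hello world.", "Another example."]})

trainer = SFTTrainer(
    model=AutoModelForCausalLM.from_pretrained("facebook/opt-350m"),
    train_dataset=train_dataset,
    args=TrainingArguments(
        output_dir="tmp_trainer",  # assumed
        learning_rate=5e-5,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        seed=42,
        lr_scheduler_type="linear",
        num_train_epochs=3.0,
    ),
)
trainer.train()
```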
dwb2023/llama38binstruct_summarize_v3
dwb2023
"2024-06-14T06:59:29Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-06-14T06:55:50Z"
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: llama38binstruct_summarize_v3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama38binstruct_summarize_v3

This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8062

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4934        | 1.25  | 25   | 1.6876          |
| 0.4435        | 2.5   | 50   | 1.7002          |
| 0.2128        | 3.75  | 75   | 1.7664          |
| 0.1355        | 5.0   | 100  | 1.8062          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
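Because this is a PEFT adapter, inference means attaching the LoRA weights from this repo to the base model named in the card. A minimal sketch follows; the 4-bit quantization used at training time is omitted here for simplicity, and the prompt is a placeholder:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3-8B-Instruct", device_map="auto"
)
model = PeftModel.from_pretrained(base, "dwb2023/llama38binstruct_summarize_v3")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

prompt = "Summarize the following document: ..."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```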
Gianmsk/noel
Gianmsk
"2024-06-14T06:58:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T06:58:45Z"
Entry not found
eastwind/gpt2-audio-tiny-sherlock-100k-overfit
eastwind
"2024-06-14T18:12:42Z"
0
1
null
[ "region:us" ]
null
"2024-06-14T06:59:11Z"
https://github.com/nivibilla/build-nanogpt/tree/audio
MorphosDynamics/CO-OPR8
MorphosDynamics
"2024-06-14T06:59:16Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-14T06:59:16Z"
---
license: apache-2.0
---
leo-sai/test-model
leo-sai
"2024-06-14T07:00:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-14T07:00:12Z"
Entry not found