| column | dtype | min | max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 122 |
| author | string (length) | 2 | 42 |
| last_modified | unknown | – | – |
| downloads | int64 | 0 | 708M |
| likes | int64 | 0 | 10.9k |
| library_name | string (236 classes) | – | – |
| tags | sequence (length) | 1 | 2.16k |
| pipeline_tag | string (48 classes) | – | – |
| createdAt | unknown | – | – |
| card | string (length) | 1 | 901k |
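To make the schema above concrete, here is a minimal sketch of how rows of this dump can be handled, assuming each row is materialized as a plain Python dict keyed by the schema's column names. The two example rows are copied from records that appear later in this dump; the filtering logic itself is illustrative, not part of any library API.

```python
# Each row of the dump as a dict keyed by the schema's column names.
# Values are taken from two records in this dump; fields not needed here are omitted.
rows = [
    {
        "modelId": "cross-ling-know/llama3-8b-wiki2-mixed-lang-sentence",
        "author": "cross-ling-know",
        "downloads": 0,
        "likes": 0,
        "library_name": "transformers",
        "tags": ["transformers", "safetensors", "llama", "text-generation"],
        "pipeline_tag": "text-generation",
    },
    {
        "modelId": "eepol/Sumie",
        "author": "eepol",
        "downloads": 0,
        "likes": 0,
        "library_name": None,  # null in the dump
        "tags": ["license:mit", "region:us"],
        "pipeline_tag": None,
    },
]

# Filter rows by pipeline_tag, as the dataset viewer would.
text_gen = [r["modelId"] for r in rows if r["pipeline_tag"] == "text-generation"]
print(text_gen)  # -> ['cross-ling-know/llama3-8b-wiki2-mixed-lang-sentence']
```

Note that `library_name` and `pipeline_tag` are frequently `null` in this dump, so filters should tolerate `None`.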
sumya24/wav2vec2-conformer-rel-pos-large-speech-commands
sumya24
"2024-06-18T14:02:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:02:00Z"
Entry not found
sharvaanit/mistral-7b-style
sharvaanit
"2024-06-18T14:04:11Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T14:03:45Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cross-ling-know/llama3-8b-wiki2-mixed-lang-sentence
cross-ling-know
"2024-06-18T14:48:03Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:07:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
elymatos/ellipsis
elymatos
"2024-06-18T14:08:26Z"
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
"2024-06-18T14:08:26Z"
--- license: gpl-3.0 ---
wdli/test
wdli
"2024-06-18T14:10:51Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T14:10:44Z"
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** wdli
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jaydeepgami56/mt0-large-ia3
jaydeepgami56
"2024-06-18T14:11:18Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T14:11:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TalTzur/testing
TalTzur
"2024-06-18T14:12:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:12:13Z"
Entry not found
cross-ling-know/llama3-8b-wiki2-mixed-lang-sentence8words
cross-ling-know
"2024-06-18T14:48:06Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:12:23Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alkovika/Sova
Alkovika
"2024-06-18T14:13:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:13:22Z"
Entry not found
baxtos/gornavik09-3
baxtos
"2024-06-18T14:15:45Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-18T14:14:41Z"
Entry not found
baxtos/gornavik10-3
baxtos
"2024-06-18T14:18:55Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-18T14:17:50Z"
Entry not found
Svyat0074/Language_Model_NIR
Svyat0074
"2024-06-18T16:17:49Z"
0
0
null
[ "license:llama2", "region:us" ]
null
"2024-06-18T14:18:44Z"
--- license: llama2 ---
eepol/Sumie
eepol
"2024-06-18T14:21:07Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-18T14:19:50Z"
--- license: mit ---
SouravModak/instruct-pix2pix-model
SouravModak
"2024-06-18T14:20:23Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:20:23Z"
Entry not found
jointriple/brand_classification_1_20240618_tokenizer_3
jointriple
"2024-06-18T14:21:12Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:eu" ]
null
"2024-06-18T14:21:09Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pt-sk/llama_python
pt-sk
"2024-06-19T08:28:08Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-18T14:23:55Z"
--- license: mit ---
raidavid/whisper-small-ip-28-have-opendata_20240618_v3_downleaner
raidavid
"2024-06-18T19:59:53Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-18T14:25:37Z"
Entry not found
Piotrasz/Llama-2-7b-hf-ROME-50-en
Piotrasz
"2024-06-18T14:56:38Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:30:19Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ishitaunde/distilbert-base-uncased-finetuned-imdb
ishitaunde
"2024-06-18T14:30:40Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:30:40Z"
Entry not found
svercoutere/llama-3-8b-instruct-abb-lora
svercoutere
"2024-06-18T14:33:07Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "en", "nl", "dataset:svercoutere/llama3_abb_instruct_dataset", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T14:31:20Z"
---
language:
- en
- nl
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
datasets:
- svercoutere/llama3_abb_instruct_dataset
---

# LLaMA-3-8B-Instruct LoRA Finetuned Model for ABB

A general breakdown of the ABB-LLM model.

## Motivation

As a tool for translation, summarization, and QA tasks: the ABB-LLM model is designed to handle tasks that require generating new text, such as translation, summarization, and question answering (QA).

As a baseline for classification, named entity recognition (NER), and other tasks: for tasks that involve understanding and processing text, such as classification and NER, this model provides a solid baseline.

## Long-Term Vision

Custom model training: once enough data is available, custom models should be trained for specific tasks. This approach is more efficient and yields better performance than using a general-purpose LLM (like this one).

Fine-tuning specialized models: models like BERT, RoBERTa, etc., should be fine-tuned for specific tasks like classification and NER, where they will outperform small LLMs.

## What to Expect

Limitations: current 8B models are inadequate for QA tasks due to higher rates of hallucination and lower accuracy. It is therefore advised to use small models only for summarization, translation, and classification tasks.

Context-based tasks: for tasks that rely on provided context (such as documents or search results), small models can be effective. These tasks include summarization, translation, classification, and NER.

Output format: this model is trained to return JSON output, which is more structured and easier to work with than the verbose default output of the base 8B model.

## Use Cases

The ABB-LLM model is suitable for various tasks where context or facts are provided. These include:

- Summarization: generate concise summaries of any text, such as agenda items or BPMN files.
- Translation: perform simple translations of text, including agenda items and BPMN files.
- Classification: classify text into predefined hierarchies, such as categorizing agenda items or BPMN files.
- Named entity recognition (NER): extract entities from text, useful for identifying key information in agenda items or BPMN files.
- Keyword extraction: extract relevant keywords from text, aiding in the identification of important terms in agenda items or BPMN files.

## Datasets

The ABB-LLM model is trained on [svercoutere/llama3_abb_instruct_dataset](https://huggingface.co/datasets/svercoutere/llama3_abb_instruct_dataset), which uses the following format:

\#### Context:
{Dutch text documents, JSON objects, ...}

\#### {task to be performed with the context}

Examples of these tasks can be found within the dataset.
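As a concrete illustration of the prompt format described above (a minimal sketch, not part of the original card — the function name and the sample context/task values are invented):

```python
def build_abb_prompt(context: str, task: str) -> str:
    """Assemble a prompt in the llama3_abb_instruct_dataset format:
    the context block first, then the task to perform with that context."""
    return f"#### Context:\n{context}\n#### {task}"

# Hypothetical example: a Dutch agenda item as JSON context, with a summarization task.
prompt = build_abb_prompt(
    context='{"agendapunt": "Goedkeuring notulen vorige zitting"}',
    task="Summarize the agenda item above in one sentence.",
)
print(prompt)
```

The same helper works for any of the task types listed above (translation, classification, NER, keyword extraction) by changing the task string.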
AmberYifan/spin-margin2
AmberYifan
"2024-06-18T15:30:32Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:31:29Z"
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- generated_from_trainer
model-index:
- name: spin-margin2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# spin-margin2

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Rewards/real: -0.7975
- Rewards/generated: -20.4822
- Rewards/accuracies: 1.0
- Rewards/margins: 19.6846
- Logps/generated: -303.8466
- Logps/real: -141.0674
- Logits/generated: -2.6068
- Logits/real: -2.3492

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/real | Rewards/generated | Rewards/accuracies | Rewards/margins | Logps/generated | Logps/real | Logits/generated | Logits/real |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------------:|:------------------:|:---------------:|:---------------:|:----------:|:----------------:|:-----------:|
| 0.0043 | 0.19 | 100 | 0.0049 | 0.9120 | -9.6012 | 1.0 | 10.5132 | -195.0367 | -123.9721 | -2.7982 | -2.5652 |
| 0.0034 | 0.39 | 200 | 0.0024 | -0.0739 | -14.1834 | 1.0 | 14.1095 | -240.8593 | -133.8314 | -2.8109 | -2.5347 |
| 0.0007 | 0.58 | 300 | 0.0012 | -0.2381 | -16.9127 | 1.0 | 16.6746 | -268.1524 | -135.4731 | -2.7308 | -2.4046 |
| 0.0016 | 0.78 | 400 | 0.0010 | -1.1878 | -19.5719 | 1.0 | 18.3841 | -294.7439 | -144.9703 | -2.6559 | -2.3917 |
| 0.0001 | 0.97 | 500 | 0.0010 | -0.7975 | -20.4822 | 1.0 | 19.6846 | -303.8466 | -141.0674 | -2.6068 | -2.3492 |

### Framework versions

- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
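As a sanity check on the hyperparameters above (a sketch added here, not part of the original card), the effective train batch size follows from the per-device batch size times the device count — with a gradient-accumulation factor of 1, which the card implies but does not state:

```python
train_batch_size = 8             # per device, from the card
num_devices = 4                  # from the card
gradient_accumulation_steps = 1  # assumed: 32 / (8 * 4)

# Effective batch size seen by the optimizer per step
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card
```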
jkk58/01
jkk58
"2024-06-18T14:31:57Z"
0
0
null
[ "license:lgpl-3.0", "region:us" ]
null
"2024-06-18T14:31:57Z"
--- license: lgpl-3.0 ---
okeokaoke/dataworld
okeokaoke
"2024-06-18T14:33:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:33:02Z"
```python
import datadotworld as dw

ds = dw.load_dataset('jonloyens/intermediate-data-world', auto_update=True)
shootings_df = ds.dataframes['fatal-police-shootings-data']
```
Roselia-penguin/medical_LLaMA3-8B-Chinese-Chat_8-bit-quantization
Roselia-penguin
"2024-06-18T18:40:09Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "medical", "llama-factory", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:35:42Z"
--- license: apache-2.0 tags: - medical - llama-factory metrics: - bleu --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dalekirkwood/testmodel
dalekirkwood
"2024-06-18T14:36:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:36:30Z"
Entry not found
loringw/example-model
loringw
"2024-06-18T15:00:52Z"
0
0
null
[ "arxiv:1910.09700", "license:mit", "region:us" ]
null
"2024-06-18T14:37:25Z"
--- # My First Model license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
willoooooooo/medical_Gemma-1.1-7B-Chat_none-quantization
willoooooooo
"2024-06-18T14:45:36Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:37:39Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
minhdang1/vit-base-patch16-224-finetuned-eurosat
minhdang1
"2024-06-18T14:59:28Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-18T14:40:07Z"
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-eurosat
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8446601941747572
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned-eurosat

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3894
- Accuracy: 0.8447

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.0761 | 0.5469 |
| 1.1435 | 2.0 | 10 | 0.6466 | 0.7735 |
| 1.1435 | 3.0 | 15 | 0.4962 | 0.8123 |
| 0.5372 | 4.0 | 20 | 0.4365 | 0.8252 |
| 0.5372 | 5.0 | 25 | 0.4118 | 0.8382 |
| 0.362 | 6.0 | 30 | 0.4031 | 0.8414 |
| 0.362 | 7.0 | 35 | 0.3944 | 0.8511 |
| 0.3028 | 8.0 | 40 | 0.3930 | 0.8414 |
| 0.3028 | 9.0 | 45 | 0.3928 | 0.8479 |
| 0.2708 | 10.0 | 50 | 0.3894 | 0.8447 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.4
- Tokenizers 0.14.1
shuyuej/MedLLaMA3-70B-English
shuyuej
"2024-06-20T15:12:58Z"
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-18T14:40:36Z"
--- license: apache-2.0 ---
DLI-Lab/camel
DLI-Lab
"2024-06-18T15:50:45Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:gpl", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:42:02Z"
--- license: gpl ---
wdli/llama3-instruct_soda_lora_1_06181015
wdli
"2024-06-18T17:05:41Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "dataset:wdli/soda_dialogue_llama3", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T14:42:07Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl datasets: - wdli/soda_dialogue_llama3 --- # Uploaded model - **Developed by:** wdli - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
Wangf3014/Mamba-Reg
Wangf3014
"2024-06-18T15:36:49Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-18T14:42:08Z"
--- license: unknown --- Official models of "Mamba-R: Vision Mamba Also Needs Registers".
jointriple/brand_classification_1_20240618_tokenizer_4
jointriple
"2024-06-18T14:42:46Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:eu" ]
null
"2024-06-18T14:42:43Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anamikac2708/Mistral-7B-DORA-finetuned-investopedia-Lora-Adapters
anamikac2708
"2024-06-18T15:53:42Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "finlang", "dora", "en", "arxiv:2402.09353", "arxiv:2404.18796", "base_model:mistralai/Mistral-7B-v0.1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T14:47:58Z"
--- language: - en license: cc-by-nc-4.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - finlang - dora base_model: mistralai/Mistral-7B-v0.1 --- # Uploaded model - **Developed by:** anamikac2708 - **License:** cc-by-nc-4.0 - **Finetuned from model :** mistralai/Mistral-7B-v0.1 This Mistral model was trained with Huggingface's TRL library and DoRA (https://arxiv.org/abs/2402.09353) using the open-sourced finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset developed for finance applications by the FinLang Team. The DoRA paper proposes Weight-Decomposed Low-Rank Adaptation, which decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, specifically employing LoRA for directional updates to efficiently minimize the number of trainable parameters. It can therefore enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead. ## How to Get Started with the Model <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ```python import torch from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer, pipeline peft_model_id = "anamikac2708/Mistral-7B-DORA-finetuned-investopedia-Lora-Adapters" # Load Model with PEFT adapter model = AutoPeftModelForCausalLM.from_pretrained( peft_model_id, device_map="auto", torch_dtype=torch.float16, #load_in_4bit = True ) tokenizer = AutoTokenizer.from_pretrained(peft_model_id) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n CONTEXT:\n D.
in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}] prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id) print(f"Query:\n{example[1]['content']}") print(f"Context:\n{example[0]['content']}") print(f"Original Answer:\n{example[2]['content']}") print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}") ``` ## Training Details ``` Peft Config : { 'Technique' : 'QLORA', 'rank': 256, 'target_modules' : ["q_proj", "k_proj", "v_proj", "o_proj","gate_proj", "up_proj", "down_proj",], 'lora_alpha' : 128, 'lora_dropout' : 0, 'bias': "none", } Hyperparameters: { "epochs": 3, "evaluation_strategy": "epoch", "gradient_checkpointing": True, "max_grad_norm" : 0.3, "optimizer" : "adamw_torch_fused", "learning_rate" : 2e-5, "lr_scheduler_type": "constant", "warmup_ratio" : 0.03, "per_device_train_batch_size" : 4, "per_device_eval_batch_size" : 4, "gradient_accumulation_steps" : 4 } ``` ## The model was trained on 1xA100 80GB; below are the loss and memory consumption details: {'eval_loss': 0.946821391582489, 'eval_runtime': 840.1526, 'eval_samples_per_second': 0.801, 'eval_steps_per_second': 0.401, 
'epoch': 3.0} {'train_runtime': 64796.4597, 'train_samples_per_second': 0.246, 'train_steps_per_second': 0.031, 'train_loss': 0.709615581515563, 'epoch': 3.0} ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> We evaluated the model on a test set (1k sample) of https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using proprietary LLMs as a jury on four criteria (Correctness, Faithfulness, Clarity, Completeness) on a scale of 1-5 (1 being worst & 5 being best), inspired by the paper Replacing Judges with Juries https://arxiv.org/abs/2404.18796. The model got an average score of 4.48. The average inference speed of the model is 37 secs. Human evaluation is in progress to check the percentage of alignment between human and LLM judgments. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking into ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## License Since non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0.
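The DoRA weight decomposition described above can be illustrated with a minimal NumPy sketch. This is not the training code, and every name below is illustrative; it only shows how a pre-trained weight splits into magnitude and direction, with LoRA factors updating the direction:

```python
import numpy as np

# DoRA idea: decompose a pre-trained weight W0 into a per-row magnitude m
# and a direction V, then let LoRA factors (B @ A) update only the direction.
np.random.seed(0)
d_out, d_in, r = 8, 16, 4

W0 = np.random.randn(d_out, d_in)                      # frozen pre-trained weight
m = np.linalg.norm(W0, axis=1, keepdims=True)          # trainable magnitude, initialized from W0
A = 0.01 * np.random.randn(r, d_in)                    # LoRA low-rank factor
B = np.zeros((d_out, r))                               # B starts at zero

V = W0 + B @ A                                         # direction updated via LoRA
W = m * V / np.linalg.norm(V, axis=1, keepdims=True)   # recombine magnitude and direction

# With B = 0 the adapted weight equals the original, so fine-tuning
# starts exactly at the pre-trained model.
print(np.allclose(W, W0))  # True
```

Only m, A, and B are trained, which is why the adapter checkpoint stays small, and W can be merged back into a single matrix so inference cost is unchanged.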
Dex-X/rag
Dex-X
"2024-06-18T14:48:48Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-18T14:48:48Z"
--- license: apache-2.0 ---
LeRedox/redox
LeRedox
"2024-06-18T14:52:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:52:48Z"
Entry not found
lilia0738/medical_ChineseLLaMA2-7B-Chat_none-quantization
lilia0738
"2024-06-18T15:01:40Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T14:54:53Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
th041/vit-weldclassifyv3
th041
"2024-06-18T15:19:48Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-18T14:55:24Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-weldclassifyv3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.920863309352518 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-weldclassifyv3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2671 - Accuracy: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 13 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.8398 | 0.6410 | 100 | 1.0312 | 0.5036 | | 0.5613 | 1.2821 | 200 | 0.7068 | 0.6619 | | 0.4296 | 1.9231 | 300 | 0.4008 | 0.8309 | | 0.3475 | 2.5641 | 400 | 0.3345 | 0.8813 | | 0.1183 | 3.2051 | 500 | 0.4293 | 0.8489 | | 0.1531 | 3.8462 | 600 | 0.2748 | 0.9137 | | 0.1174 | 4.4872 | 700 | 0.3649 | 0.8813 | | 0.0498 | 5.1282 | 800 | 0.3279 | 0.8921 | | 0.0817 | 5.7692 | 900 | 0.2763 | 0.9353 | | 0.0075 | 6.4103 | 1000 | 0.2671 | 0.9209 | | 0.0265 | 7.0513 | 1100 | 
0.3185 | 0.9209 | | 0.0457 | 7.6923 | 1200 | 0.3776 | 0.9101 | | 0.0032 | 8.3333 | 1300 | 0.2835 | 0.9388 | | 0.0027 | 8.9744 | 1400 | 0.5365 | 0.8885 | | 0.0024 | 9.6154 | 1500 | 0.2817 | 0.9460 | | 0.0021 | 10.2564 | 1600 | 0.2890 | 0.9460 | | 0.002 | 10.8974 | 1700 | 0.2934 | 0.9460 | | 0.0019 | 11.5385 | 1800 | 0.2976 | 0.9460 | | 0.0018 | 12.1795 | 1900 | 0.2996 | 0.9460 | | 0.0018 | 12.8205 | 2000 | 0.3006 | 0.9460 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
datvtn/antelopev2
datvtn
"2024-06-18T15:06:54Z"
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
"2024-06-18T14:57:10Z"
--- license: apache-2.0 ---
Maksiksay/textual_inversion_cat
Maksiksay
"2024-06-18T14:57:17Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T14:57:17Z"
Entry not found
byoussef/MobileNetV4_Conv_Small_TFLite_224
byoussef
"2024-06-19T03:56:28Z"
0
0
timm
[ "timm", "tflite", "image-classification", "MobileNetV4", "dataset:imagenet-1k", "arxiv:2404.10518", "license:apache-2.0", "region:us" ]
image-classification
"2024-06-18T15:00:21Z"
--- tags: - image-classification - timm - MobileNetV4 license: apache-2.0 datasets: - imagenet-1k pipeline_tag: image-classification --- # Model card for MobileNetV4_Conv_Small_TFLite_224 A MobileNet-V4 image classification model. Trained on ImageNet-1k by Ross Wightman. Converted to TFLite Float32 & Float16 formats by Youssef Boulaouane. ## Model Details - **Pytorch Weights:** https://huggingface.co/timm/mobilenetv4_conv_small.e2400_r224_in1k - **Model Type:** Image classification - **Model Stats:** - Params (M): 3.8 - GMACs: 0.2 - Activations (M): 2.0 - Input Shape: (1, 224, 224, 3) - **Dataset:** ImageNet-1k - **Papers:** - MobileNetV4 -- Universal Models for the Mobile Ecosystem: https://arxiv.org/abs/2404.10518 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models - **Original:** https://github.com/tensorflow/models/tree/master/official/vision ## Model Usage ### Image Classification in Python ```python import numpy as np import tensorflow as tf from PIL import Image # Placeholder paths: the downloaded .tflite file and an input image tf_model_path = 'model_float32.tflite' image_path = 'example.jpg' # Load label file with open('imagenet_classes.txt', 'r') as file: lines = file.readlines() index_to_label = {index: line.strip() for index, line in enumerate(lines)} # Initialize interpreter and IO details tfl_model = tf.lite.Interpreter(model_path=tf_model_path) tfl_model.allocate_tensors() input_details = tfl_model.get_input_details() output_details = tfl_model.get_output_details() # Load and preprocess the image image = Image.open(image_path).resize((224, 224), Image.BICUBIC) image = np.array(image, dtype=np.float32) mean = np.array([0.485, 0.456, 0.406], dtype=np.float32) std = np.array([0.229, 0.224, 0.225], dtype=np.float32) image = (image / 255.0 - mean) / std image = np.expand_dims(image, axis=0) # add batch dimension -> (1, 224, 224, 3) # Inference and postprocessing input = input_details[0] tfl_model.set_tensor(input["index"], image) tfl_model.invoke() tfl_output = tfl_model.get_tensor(output_details[0]["index"]) tfl_output_tensor = tf.convert_to_tensor(tfl_output) 
tfl_softmax_output = tf.nn.softmax(tfl_output_tensor, axis=1) tfl_top5_probs, tfl_top5_indices = tf.math.top_k(tfl_softmax_output, k=5) # Get the top5 class labels and probabilities tfl_probs_list = tfl_top5_probs[0].numpy().tolist() tfl_index_list = tfl_top5_indices[0].numpy().tolist() for index, prob in zip(tfl_index_list, tfl_probs_list): print(f"{index_to_label[index]}: {round(prob*100, 2)}%") ``` ### Deployment on Mobile Refer to guides available here: https://ai.google.dev/edge/lite/inference ## Citation ```bibtex @article{qin2024mobilenetv4, title={MobileNetV4-Universal Models for the Mobile Ecosystem}, author={Qin, Danfeng and Leichner, Chas and Delakis, Manolis and Fornoni, Marco and Luo, Shixin and Yang, Fan and Wang, Weijun and Banbury, Colby and Ye, Chengxi and Akin, Berkin and others}, journal={arXiv preprint arXiv:2404.10518}, year={2024} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
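For reference, the softmax and top-k postprocessing used in the usage snippet above can be reproduced without TensorFlow. This is an illustrative NumPy sketch (function name and sample logits are invented for the example):

```python
import numpy as np

def top_k_probs(logits: np.ndarray, k: int = 5):
    """Softmax over class logits, then return the k most probable indices."""
    z = logits - logits.max()            # subtract max to stabilize the exponentials
    probs = np.exp(z) / np.exp(z).sum()
    idx = np.argsort(probs)[::-1][:k]    # indices sorted by descending probability
    return idx.tolist(), probs[idx].tolist()

# Toy logits standing in for the interpreter's output tensor
logits = np.array([0.5, 2.0, -1.0, 3.0, 0.0, 1.0])
indices, probs = top_k_probs(logits, k=3)
print(indices)  # [3, 1, 5]
```

The resulting indices can be mapped to class names through the same `index_to_label` dictionary the snippet builds from the label file.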
digiplay/chosen-Mix
digiplay
"2024-06-18T15:10:37Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-06-18T15:02:01Z"
--- license: other --- Model info: https://civitai.com/models/17148?modelVersionId=125302
Tohrumi/mBART_cc25_finetune_en-vi_translation
Tohrumi
"2024-06-18T18:08:29Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "translation", "generated_from_trainer", "base_model:facebook/mbart-large-cc25", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2024-06-18T15:02:34Z"
--- base_model: facebook/mbart-large-cc25 tags: - translation - generated_from_trainer model-index: - name: mBART_cc25_finetune_en-vi_translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_cc25_finetune_en-vi_translation This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
Xiaolihai/BioMistral-7B_MeDistill_28_biomistral_ep10
Xiaolihai
"2024-06-18T15:02:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:02:52Z"
Entry not found
feliperafael/amy_yolo_model_pantene
feliperafael
"2024-06-18T15:04:05Z"
0
0
ultralytics
[ "ultralytics", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "model-index", "region:us" ]
image-classification
"2024-06-18T15:03:46Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch library_name: ultralytics library_version: 8.0.239 inference: false model-index: - name: feliperafael/amy_yolo_model_pantene results: - task: type: image-classification metrics: - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="feliperafael/amy_yolo_model_pantene" src="https://huggingface.co/feliperafael/amy_yolo_model_pantene/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['pantene'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.29 ultralytics==8.0.239 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('feliperafael/amy_yolo_model_pantene') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ```
marcossoaresgg/MinMillV4
marcossoaresgg
"2024-06-18T15:04:46Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-18T15:03:58Z"
--- license: openrail ---
usamiername1/Alabsi2024
usamiername1
"2024-06-18T15:17:24Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:04:34Z"
Entry not found
Hireath/First_Model
Hireath
"2024-06-18T15:17:22Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:06:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mengshyu/Llama-3-8B-Instruct-q4f16_0-MLC
mengshyu
"2024-06-18T15:10:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:07:44Z"
Entry not found
Frixi/Patrick_Star_BFBB
Frixi
"2024-06-18T15:08:42Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-18T15:07:49Z"
--- license: openrail ---
matt-suncy/sparse_autoencoder
matt-suncy
"2024-06-18T18:12:53Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:08:27Z"
Entry not found
AndrewDOrlov/llama-adapter-v2
AndrewDOrlov
"2024-06-18T15:09:49Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:09:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jdelgado2002/diabetic_retinopathy_detection
jdelgado2002
"2024-06-18T23:41:07Z"
0
0
fastai
[ "fastai", "vision", "image-classification", "en", "base_model:microsoft/resnet-50", "license:mit", "region:us" ]
image-classification
"2024-06-18T15:11:35Z"
--- tags: - fastai - vision - image-classification license: mit language: - en library_name: fastai base_model: microsoft/resnet-50 pipeline_tag: image-classification metrics: - accuracy --- # Model card Try our model [here](https://huggingface.co/spaces/jdelgado2002/proliferative_retinopathy_detection) ## Model description This is an image classification model that uses resnet-50 as the base model to classify diabetic retinopathy. ## Intended uses & limitations Given an image taken using fundus photography, this model will identify diabetic retinopathy on a scale of 0 to 4: 0 - No DR 1 - Mild 2 - Moderate 3 - Severe 4 - Proliferative DR ## Training * We trained our model with retina images taken using fundus photography under a variety of imaging conditions. * The training data was gathered for a Kaggle competition by the Asia Pacific Tele-Ophthalmology Society (APTOS) in 2019 * [Training data](https://www.kaggle.com/competitions/aptos2019-blindness-detection/data) * [Training Process](https://www.kaggle.com/code/josemauriciodelgado/proliferative-retinopathy) ## Evaluation Trained for 50 epochs, reaching 83% accuracy on our validation data | Epoch | Train Loss | Valid Loss | Accuracy | Error Rate | Time | |-------|------------|------------|----------|------------|-------| | 0 | 1.271288 | 1.351223 | 0.665301 | 0.334699 | 03:47 | | 1 | 1.013268 | 0.742499 | 0.741803 | 0.258197 | 04:12 | | 2 | 0.806825 | 0.687152 | 0.754098 | 0.245902 | 03:42 | | 0 | 0.631816 | 0.533298 | 0.789617 | 0.210383 | 04:22 | | 1 | 0.537469 | 0.457713 | 0.829235 | 0.170765 | 04:23 | | 2 | 0.498419 | 0.515875 | 0.810109 | 0.189891 | 04:20 | | 3 | 0.478353 | 0.511856 | 0.815574 | 0.184426 | 04:13 | | 4 | 0.459457 | 0.475843 | 0.801913 | 0.198087 | 04:17 | ... 
| 48 | 0.024947 | 0.800241 | 0.840164 | 0.159836 | 03:21 | | 49 | 0.027916 | 0.803851 | 0.838798 | 0.161202 | 03:26 | ![confusion matrix](https://drive.google.com/file/d/1lI7pps03RXTFKYjY_iv4UPeSOhqQhxQB/view) We submitted our model for validation to the [APTOS 2019 Blindness Detection Competition](https://www.kaggle.com/competitions/aptos2019-blindness-detection/submissions#), achieving a private score of 0.869345 ## Trying the model Note: You can easily try our model [here](https://huggingface.co/spaces/jdelgado2002/proliferative_retinopathy_detection) This application uses a trained model to detect the severity of diabetic retinopathy from a given retina image taken using fundus photography. The severity levels are: - 0 - No DR - 1 - Mild - 2 - Moderate - 3 - Severe - 4 - Proliferative DR ### How to Use the Model To use the model, you need to provide an image of the retina taken using fundus photography. The model will then predict the severity of diabetic retinopathy and return a dictionary where the keys are the severity levels and the values are the corresponding probabilities. ### Breakdown of the `app.py` File Here's a breakdown of what the `app.py` file is doing: 1. **Import necessary libraries**: The file starts by importing the necessary libraries. This includes `gradio` for creating the UI, `fastai.vision.all` for loading the trained model, and `skimage` for image processing. 2. **Define helper functions**: The `get_x` and `get_y` functions are defined. These functions are used to get the x and y values from the input dictionary. In this case, the x value is the image and the y value is the diagnosis. 3. **Load the trained model**: The trained model is loaded from the `model.pkl` file using the `load_learner` function from `fastai`. 4. **Define label descriptions**: A dictionary is defined to map label numbers to descriptions. This is used to return descriptions instead of numbers in the prediction result. 5. 
**Define the prediction function**: The `predict` function is defined. This function takes an image as input, makes a prediction using the trained model, and returns a dictionary where the keys are the severity levels and the values are the corresponding probabilities. 6. **Define title and description**: The title and description of the application are defined. These will be displayed in the Gradio UI. To run the application, you need to create a Gradio interface with the `predict` function as the prediction function, an image as the input, and a label as the output. You can then launch the interface to start the application. ```python import gradio as gr from fastai.vision.all import * import skimage # Define the functions to get the x and y values from the input dictionary - in this case, the x value is the image and the y value is the diagnosis # needed to load the model since we defined them during training def get_x(r): return "" def get_y(r): return r['diagnosis'] learn = load_learner('model.pkl') labels = learn.dls.vocab # Define the mapping from label numbers to descriptions label_descriptions = { 0: "No DR", 1: "Mild", 2: "Moderate", 3: "Severe", 4: "Proliferative DR" } def predict(img): img = PILImage.create(img) pred, pred_idx, probs = learn.predict(img) # Use the label_descriptions dictionary to return descriptions instead of numbers return {label_descriptions[labels[i]]: float(probs[i]) for i in range(len(labels))} title = "Diabetic Retinopathy Detection" description = """Detects severity of diabetic retinopathy from a given retina image taken using fundus photography - 0 - No DR 1 - Mild 2 - Moderate 3 - Severe 4 - Proliferative DR """ article = "<p style='text-align: center'><a href='https://www.kaggle.com/code/josemauriciodelgado/proliferative-retinopathy' target='_blank'>Notebook</a></p>" # Get a list of all image paths in the test folder test_folder = "test" # replace with the actual path to your test folder image_paths = [os.path.join(test_folder, img) 
for img in os.listdir(test_folder) if img.endswith(('.png', '.jpg', '.jpeg'))] gr.Interface( fn=predict, inputs=gr.Image(), outputs=gr.Label(num_top_classes=5), examples=image_paths, # set the examples parameter to the list of image paths article=article, title=title, description=description, ).launch() ``` [source code](https://huggingface.co/spaces/jdelgado2002/proliferative_retinopathy_detection/tree/main)
ragomes/DistilBERT-finetuned-classes
ragomes
"2024-06-18T15:12:02Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:12:02Z"
Entry not found
aflah/llama-3-8b-bnb-4bit__Climate-Science-Steps-60
aflah
"2024-06-18T15:13:03Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:12:47Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** aflah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
phong12azq/marian-finetuned-kde4-en-to-fr
phong12azq
"2024-06-18T16:25:29Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-18T15:14:06Z"
Entry not found
Naveenpoliasetty/llama3-8B-merged-V-small
Naveenpoliasetty
"2024-06-18T15:47:40Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-18T15:14:27Z"
--- license: mit --- ### Model Name: Merged LLaMA 3 (8B) #### Model Type: Merged Language Model #### Description This is my first large language model, created by merging three individual LLaMA 3 models, each with 8 billion parameters, using a linear method. The resulting model combines the strengths of each individual model, enabling it to generate more accurate and informative text. Architecture: The model is based on the LLaMA 3 architecture, which is a transformer-based language model designed for efficient and scalable language understanding. The three individual models were trained on a large corpus of text data and then merged using a linear method to create a single, more powerful model. Parameters: The merged model has a total of 4.65 billion parameters, making it a large and powerful language model capable of handling complex language tasks. Training: The individual models were trained on a large corpus of text data, and the merged model was fine-tuned on a smaller dataset to adapt to the merged architecture. Capabilities: The Merged LLaMA 3 (8B) model is capable of generating human-like text, answering questions, and completing tasks such as language translation, text summarization, and dialogue generation. Limitations: While the model is powerful, it is not perfect and may make mistakes or generate inconsistent text in certain situations. Additionally, the model may not perform well on tasks that require common sense or real-world knowledge. Intended Use: The Merged LLaMA 3 (8B) model is intended for research and development purposes, such as exploring the capabilities of large language models, developing new language-based applications, and improving the state of the art in natural language processing. License: The model is licensed under [MIT License].
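As an illustration of the linear merging described above, here is a minimal, hypothetical sketch of weighted parameter averaging. The real merge operates on full checkpoint tensors (e.g. with a tool such as mergekit); the toy dicts below merely stand in for model state dicts.

```python
def linear_merge(state_dicts, weights=None):
    """Average matching parameters across several model state dicts.

    A toy sketch of the 'linear' merge method: each merged parameter
    is a weighted sum of the corresponding parameters of the input
    models. Real merges operate on torch tensors loaded from
    checkpoints, not on plain floats as shown here.
    """
    if weights is None:
        # Equal weighting by default, e.g. 1/3 each for three models.
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

# Toy example: three "models" whose single scalar parameter stands in
# for a full weight tensor; the merge is their (weighted) average.
models = [{"layer.w": 1.0}, {"layer.w": 2.0}, {"layer.w": 3.0}]
print(linear_merge(models))  # each merged parameter is approximately 2.0
```

With non-uniform `weights`, the same function expresses a biased merge that favours one of the source models.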
mrsarthakgupta/godspeedonnx
mrsarthakgupta
"2024-06-18T15:37:25Z"
0
0
transformers
[ "transformers", "onnx", "clip_vision_model", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:16:10Z"
Entry not found
aflah/llama-3-8b-bnb-4bit__Climate-Science-Steps-60__Merge-to-16-bit
aflah
"2024-06-18T15:26:34Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-18T15:17:01Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** aflah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Autsadin/llama3_rag_chat
Autsadin
"2024-06-18T15:45:31Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:17:55Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TTrs88/test
TTrs88
"2024-06-18T15:18:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:18:33Z"
Entry not found
haxareh/aaameri
haxareh
"2024-06-18T15:18:45Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:18:40Z"
Entry not found
nihil117/semplitv1
nihil117
"2024-06-18T15:19:02Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:18:51Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** nihil117 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
EmbeddedLLM/01-ai_Yi-1.5-6B-Chat-onnx
EmbeddedLLM
"2024-06-20T12:44:43Z"
0
0
null
[ "onnx", "pytorch", "ONNX", "DirectML", "DML", "conversational", "ONNXRuntime", "custom_code", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-18T15:19:10Z"
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - pytorch - ONNX - DirectML - DML - conversational - ONNXRuntime - custom_code --- # Yi-1.5-6B-Chat ONNX models for DirectML This repository hosts the optimized versions of [01-ai/Yi-1.5-6B-Chat](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) to accelerate inference with ONNX Runtime for DirectML. ## Usage on Windows (Intel / AMD / Nvidia / Qualcomm) ```powershell conda create -n onnx python=3.10 conda activate onnx winget install -e --id GitHub.GitLFS pip install huggingface-hub[cli] huggingface-cli download EmbeddedLLM/01-ai_Yi-1.5-6B-Chat-onnx --include=onnx/directml/01-ai_Yi-1.5-6B-Chat-int4 --local-dir .\01-ai_Yi-1.5-6B-Chat-int4 pip install numpy==1.26.4 Invoke-WebRequest -Uri "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py" -OutFile "phi3-qa.py" pip install onnxruntime-directml pip install --pre onnxruntime-genai-directml conda install conda-forge::vs2015_runtime python phi3-qa.py -m .\01-ai_Yi-1.5-6B-Chat-int4 ``` ## What is DirectML DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
RyotaKadoya1993/math_adapter2
RyotaKadoya1993
"2024-06-19T11:38:07Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:RyotaKadoya1993/fullymerged_v1_128_gen4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:20:07Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: RyotaKadoya1993/fullymerged_v1_128_gen4 --- # Uploaded model - **Developed by:** RyotaKadoya1993 - **License:** apache-2.0 - **Finetuned from model :** RyotaKadoya1993/fullymerged_v1_128_gen4 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
eroo36/model
eroo36
"2024-06-18T15:21:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:21:11Z"
Entry not found
Comfy-AI/natalia-seg-v1-kb
Comfy-AI
"2024-06-18T15:24:42Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T15:21:51Z"
Entry not found
rafaeloc15/llama3-v6
rafaeloc15
"2024-06-18T15:23:09Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:23:02Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** rafaeloc15 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Pushkraj123/mistal-model
Pushkraj123
"2024-06-18T15:24:19Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-18T15:24:19Z"
--- license: apache-2.0 ---
AFSA1729/movie-classifier
AFSA1729
"2024-06-18T15:31:56Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-18T15:24:38Z"
--- license: mit ---
RyotaKadoya1993/fullymerged_v4_adapter2
RyotaKadoya1993
"2024-06-18T15:25:07Z"
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:25:06Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
itspxsh/git-base-pokemon
itspxsh
"2024-06-18T18:22:04Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "git", "text-generation", "generated_from_trainer", "base_model:microsoft/git-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-18T15:25:34Z"
--- license: mit base_model: microsoft/git-base tags: - generated_from_trainer model-index: - name: git-base-pokemon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9952 | 0.9991 | 562 | 0.0546 | | 0.0469 | 2.0 | 1125 | 0.0516 | | 0.0384 | 2.9991 | 1687 | 0.0505 | | 0.0315 | 4.0 | 2250 | 0.0510 | | 0.0262 | 4.9956 | 2810 | 0.0516 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
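The hyperparameters listed above correspond roughly to the following transformers `TrainingArguments` (a sketch for orientation only; the actual training script is not published, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; Adam betas/epsilon match the
# transformers defaults, so they are not set explicitly here.
training_args = TrainingArguments(
    output_dir="git-base-pokemon",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                       # mixed precision (native AMP)
)
```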
cast42/yolov10n_T4_10epoch.pt
cast42
"2024-06-18T15:28:00Z"
0
0
ultralytics
[ "ultralytics", "safetensors", "object-detection", "computer-vision", "yolov10", "dataset:detection-datasets/coco", "arxiv:2405.14458", "license:agpl-3.0", "region:us" ]
object-detection
"2024-06-18T15:25:41Z"
--- license: agpl-3.0 library_name: ultralytics tags: - object-detection - computer-vision - yolov10 datasets: - detection-datasets/coco repo_url: https://github.com/THU-MIG/yolov10 inference: false --- ### Model Description [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1) - arXiv: https://arxiv.org/abs/2405.14458v1 - github: https://github.com/THU-MIG/yolov10 ### Installation ``` pip install git+https://github.com/THU-MIG/yolov10.git ``` ### Training and validation ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10n') # Training model.train(...) # after training, one can push to the hub model.push_to_hub("your-hf-username/yolov10-finetuned") # Validation model.val(...) ``` ### Inference Here's an end-to-end example showcasing inference on a cats image: ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10n') source = 'http://images.cocodataset.org/val2017/000000039769.jpg' model.predict(source=source, save=True) ``` which shows: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/628ece6054698ce61d1e7be3/tBwAsKcQA_96HCYQp7BRr.png) ### BibTeX Entry and Citation Info ``` @article{wang2024yolov10, title={YOLOv10: Real-Time End-to-End Object Detection}, author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang}, journal={arXiv preprint arXiv:2405.14458}, year={2024} } ```
elozeiri/RoBERTa-Cross-Domain
elozeiri
"2024-06-18T15:56:11Z"
0
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-18T15:26:17Z"
Entry not found
benjleite/t5-french-qg
benjleite
"2024-06-18T15:36:33Z"
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "t5s", "french", "text-generation", "question-generation", "fr", "dataset:GEM/FairytaleQA", "dataset:benjleite/FairytaleQA-translated-french", "arxiv:2406.04233", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T15:26:27Z"
--- language: - fr tags: - t5s - french - text-generation - question-generation datasets: - GEM/FairytaleQA - benjleite/FairytaleQA-translated-french license: apache-2.0 pipeline_tag: text-generation --- # Model Card for t5-french-qg ## Model Description **t5-french-qg** is a T5-based model, fine-tuned from [T5-fr](https://huggingface.co/JDBN/t5-base-fr-qg-fquad) in the **French** [machine-translated version](https://huggingface.co/datasets/benjleite/FairytaleQA-translated-french) of the [original English FairytaleQA dataset](https://huggingface.co/datasets/GEM/FairytaleQA). The task of fine-tuning is Question Generation. You can check our [paper](https://arxiv.org/abs/2406.04233), accepted in ECTEL 2024. ## Training Data **FairytaleQA** is an open-source dataset designed to enhance comprehension of narratives, aimed at students from kindergarten to eighth grade. The dataset is meticulously annotated by education experts following an evidence-based theoretical framework. It comprises 10,580 explicit and implicit questions derived from 278 child-friendly stories, covering seven types of narrative elements or relations. ## Implementation Details The encoder concatenates the answer and text, and the decoder generates the question. We use special labels to differentiate the components. Our maximum token input is set to 512, while the maximum token output is set to 128. During training, the models undergo a maximum of 20 epochs and incorporate early stopping with a patience of 2. A batch size of 16 is employed. During inference, we utilize beam search with a beam width of 5. 
## Evaluation - Question Generation | Model | ROUGE-L F1 | | ---------------- | ---------- | | t5 (for original English dataset, baseline) | 0.530 | | t5-french-qg (for the French machine-translated dataset) | 0.404 | ## Load Model and Tokenizer ```py >>> from transformers import T5ForConditionalGeneration, T5Tokenizer >>> model = T5ForConditionalGeneration.from_pretrained("benjleite/t5-french-qg") >>> tokenizer = T5Tokenizer.from_pretrained("JDBN/t5-base-fr-qg-fquad", model_max_length=512) ``` **Important Note**: Special tokens need to be added and the model embeddings must be resized: ```py >>> tokenizer.add_tokens(['<nar>', '<attribut>', '<question>', '<repondre>', '<typeréponse>', '<texte>'], special_tokens=True) >>> model.resize_token_embeddings(len(tokenizer)) ``` ## Inference Example (same parameters as used in paper experiments) Note: See our [repository](https://github.com/bernardoleite/fairytaleqa-translated) for additional code details. ```py input_text = '<repondre>' + 'Un Ours.' + '<texte>' + 'Il était une fois un ours qui aimait se promener dans la forêt...' source_encoding = tokenizer( input_text, max_length=512, padding='max_length', truncation='only_second', return_attention_mask=True, add_special_tokens=True, return_tensors='pt' ) input_ids = source_encoding['input_ids'] attention_mask = source_encoding['attention_mask'] generated_ids = model.generate( input_ids=input_ids, attention_mask=attention_mask, num_return_sequences=1, num_beams=5, max_length=512, repetition_penalty=1.0, length_penalty=1.0, early_stopping=True, use_cache=True ) preds = [ tokenizer.decode(generated_id, skip_special_tokens=False, clean_up_tokenization_spaces=True) for generated_id in generated_ids ] generated_str = ''.join(preds) print(generated_str) ``` ## Licensing Information This fine-tuned model is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). 
## Citation Information Our paper (preprint - accepted for publication at ECTEL 2024): ``` @article{leite_fairytaleqa_translated_2024, title={FairytaleQA Translated: Enabling Educational Question and Answer Generation in Less-Resourced Languages}, author={Bernardo Leite and Tomás Freitas Osório and Henrique Lopes Cardoso}, year={2024}, eprint={2406.04233}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Original FairytaleQA paper: ``` @inproceedings{xu-etal-2022-fantastic, title = "Fantastic Questions and Where to Find Them: {F}airytale{QA} {--} An Authentic Dataset for Narrative Comprehension", author = "Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby and Bradford, Nora and Sun, Branda and Hoang, Tran and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.34", doi = "10.18653/v1/2022.acl-long.34", pages = "447--460", abstract = "Question answering (QA) is a fundamental means to facilitate assessment and training of narrative comprehension skills for both machines and young children, yet there is scarcity of high-quality QA datasets carefully designed to serve this purpose. In particular, existing datasets rarely distinguish fine-grained reading skills, such as the understanding of varying narrative elements. Drawing on the reading education research, we introduce FairytaleQA, a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. 
Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations. Our dataset is valuable in two folds: First, we ran existing QA models on our dataset and confirmed that this annotation helps assess models{'} fine-grained learning skills. Second, the dataset supports question generation (QG) task in the education domain. Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.", } ``` T5-fr model: ``` @misc{github_2020_t5f, author = {Joachim Dublineau}, title = {T5 Question Generation and Question Answering}, year = {2020}, howpublished={\url{https://huggingface.co/JDBN/t5-base-fr-qg-fquad}} } ```
aflah/llama-3-8b-bnb-4bit__Climate-Science-Steps-60__LoRA-Only
aflah
"2024-06-18T15:28:32Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:28:18Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** aflah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
unixyhuang/HomeLlama-8B
unixyhuang
"2024-06-19T08:43:56Z"
0
1
transformers
[ "transformers", "safetensors", "code", "question-answering", "en", "dataset:unixyhuang/SmartHome-Device-QA", "license:afl-3.0", "endpoints_compatible", "region:us" ]
question-answering
"2024-06-18T15:29:11Z"
--- library_name: transformers tags: - code license: afl-3.0 datasets: - unixyhuang/SmartHome-Device-QA language: - en pipeline_tag: question-answering --- # Model Card for Model ID This model is trained on **unixyhuang/SmartHome-Device-QA** dataset for smart home assistant usage. The base model is llama-3-8B. The fine-tuning method is QLoRA.
bomjara/book_model_rl
bomjara
"2024-06-18T15:30:21Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-18T15:30:21Z"
--- license: mit ---
Muriet96/Natalia
Muriet96
"2024-06-18T15:32:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:32:12Z"
Entry not found
mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF
mradermacher
"2024-06-20T16:04:41Z"
0
1
transformers
[ "transformers", "gguf", "en", "base_model:deepseek-ai/DeepSeek-Coder-V2-Base", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:32:17Z"
--- base_model: deepseek-ai/DeepSeek-Coder-V2-Base language: - en library_name: transformers license: other license_link: LICENSE license_name: deepseek-license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 47.5 | for the desperate | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ1_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ1_M.gguf.part2of2) | i1-IQ1_M | 52.8 | mostly desperate | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XXS.gguf.part2of2) | i1-IQ2_XXS | 61.6 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XS.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XS.gguf.part2of2) | i1-IQ2_XS | 68.8 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_S.gguf.part2of2) | i1-IQ2_S | 70.0 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_M.gguf.part2of2) | i1-IQ2_M | 77.0 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 86.0 | IQ3_XXS probably better | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 90.9 | lower quality | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 96.4 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_S.gguf.part2of3) [PART 
3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_S.gguf.part3of3) | i1-IQ3_S | 101.8 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_S.gguf.part3of3) | i1-Q3_K_S | 101.8 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_M.gguf.part3of3) | i1-IQ3_M | 103.5 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_M.gguf.part3of3) | i1-Q3_K_M | 112.8 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_L.gguf.part3of3) | i1-Q3_K_L | 122.5 | IQ3_M probably better | | [PART 
1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ4_XS.gguf.part3of3) | i1-IQ4_XS | 125.7 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_0.gguf.part3of3) | i1-Q4_0 | 133.5 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_S.gguf.part3of3) | i1-Q4_K_S | 134.0 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_M.gguf.part3of3) | i1-Q4_K_M | 142.6 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part1of4) [PART 
2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part4of4) | i1-Q5_K_S | 162.4 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part4of4) | i1-Q5_K_M | 167.3 | | | [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part4of4) | i1-Q6_K | 193.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might 
have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
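The Usage section above defers to TheBloke's READMEs for how to concatenate multi-part files. As a minimal sketch (the filenames below are stand-ins, not the actual part files in this repo), split GGUF parts are joined in order with a plain `cat`:

```shell
# Hypothetical demo: create two tiny stand-in "parts". In practice these
# would be the downloaded *.gguf.partXofY files from the repo.
printf 'AAA' > model.gguf.part1of2
printf 'BBB' > model.gguf.part2of2

# Concatenate the parts in order into one loadable GGUF file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

# The joined file is simply the bytes of part1 followed by part2.
cat model.gguf
```

After joining, the part files can be deleted; the single `model.gguf` is what llama.cpp-compatible tools load.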
XLS/OmniNA-66m
XLS
"2024-06-18T15:40:41Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T15:32:26Z"
--- license: apache-2.0 ---
Prasann15479/miniGrok
Prasann15479
"2024-06-18T15:34:51Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-18T15:33:56Z"
--- license: mit ---
cadanagn/p
cadanagn
"2024-06-18T15:34:29Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:34:29Z"
Entry not found
liho00/omega_agi_model
liho00
"2024-06-18T15:34:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:34:35Z"
Entry not found
Philophilae/T5-base-FOLIO-fine-tuned
Philophilae
"2024-06-18T15:35:04Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:35:04Z"
Entry not found
minhdang1/vit-base-patch16-224-finetuned-context-classifier
minhdang1
"2024-06-18T16:11:35Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-18T15:36:59Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-finetuned-context-classifier results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8187702265372169 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-context-classifier This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7157 - Accuracy: 0.8188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3586 | 2.0 | 10 | 1.2322 | 0.3916 | | 1.0841 | 4.0 | 20 | 0.8444 | 0.6958 | | 0.7282 | 6.0 | 30 | 0.5498 | 0.7767 | | 0.4768 | 8.0 | 40 | 0.4273 | 0.8155 | | 0.3332 | 10.0 | 50 | 0.4059 | 0.8220 | | 0.242 | 12.0 | 60 | 0.4272 | 0.8252 | | 0.1737 | 14.0 | 70 | 0.4372 | 0.8188 | | 0.1266 | 16.0 | 80 | 0.4495 | 0.8123 | | 0.1089 | 18.0 | 90 | 0.4877 | 0.8091 | | 0.0837 | 20.0 | 100 | 
0.5318 | 0.8058 | | 0.0687 | 22.0 | 110 | 0.5300 | 0.7961 | | 0.0667 | 24.0 | 120 | 0.6253 | 0.7994 | | 0.0581 | 26.0 | 130 | 0.5495 | 0.8220 | | 0.0574 | 28.0 | 140 | 0.5646 | 0.8188 | | 0.0564 | 30.0 | 150 | 0.5990 | 0.8252 | | 0.0492 | 32.0 | 160 | 0.6436 | 0.8155 | | 0.0406 | 34.0 | 170 | 0.6225 | 0.8091 | | 0.0411 | 36.0 | 180 | 0.6168 | 0.8123 | | 0.0381 | 38.0 | 190 | 0.6731 | 0.8123 | | 0.0358 | 40.0 | 200 | 0.6198 | 0.7961 | | 0.0354 | 42.0 | 210 | 0.6216 | 0.8091 | | 0.0358 | 44.0 | 220 | 0.6933 | 0.8091 | | 0.037 | 46.0 | 230 | 0.6488 | 0.8188 | | 0.0344 | 48.0 | 240 | 0.6546 | 0.8220 | | 0.0335 | 50.0 | 250 | 0.6399 | 0.7994 | | 0.0297 | 52.0 | 260 | 0.6553 | 0.8123 | | 0.0318 | 54.0 | 270 | 0.6996 | 0.7896 | | 0.0254 | 56.0 | 280 | 0.6809 | 0.7961 | | 0.0322 | 58.0 | 290 | 0.7048 | 0.7896 | | 0.024 | 60.0 | 300 | 0.6869 | 0.8123 | | 0.0255 | 62.0 | 310 | 0.7099 | 0.8058 | | 0.0266 | 64.0 | 320 | 0.6894 | 0.8091 | | 0.0243 | 66.0 | 330 | 0.7604 | 0.8091 | | 0.0232 | 68.0 | 340 | 0.6983 | 0.8123 | | 0.019 | 70.0 | 350 | 0.6834 | 0.8091 | | 0.0235 | 72.0 | 360 | 0.7102 | 0.8091 | | 0.0262 | 74.0 | 370 | 0.6902 | 0.8155 | | 0.0206 | 76.0 | 380 | 0.6662 | 0.8091 | | 0.0238 | 78.0 | 390 | 0.7109 | 0.8220 | | 0.0202 | 80.0 | 400 | 0.7061 | 0.8058 | | 0.0204 | 82.0 | 410 | 0.7291 | 0.8155 | | 0.0231 | 84.0 | 420 | 0.7103 | 0.8091 | | 0.0217 | 86.0 | 430 | 0.7050 | 0.8123 | | 0.021 | 88.0 | 440 | 0.7037 | 0.8155 | | 0.0207 | 90.0 | 450 | 0.6996 | 0.8058 | | 0.0163 | 92.0 | 460 | 0.7137 | 0.8091 | | 0.0181 | 94.0 | 470 | 0.7153 | 0.8155 | | 0.0225 | 96.0 | 480 | 0.7105 | 0.8123 | | 0.0185 | 98.0 | 490 | 0.7140 | 0.8155 | | 0.0219 | 100.0 | 500 | 0.7157 | 0.8188 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.4 - Tokenizers 0.14.1
Sneha1502/my_awesome_qa_model
Sneha1502
"2024-06-18T15:37:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:37:57Z"
Entry not found
eroo36/path-to-save-model
eroo36
"2024-06-18T15:38:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:38:13Z"
Entry not found
Eman90/model
Eman90
"2024-06-18T15:38:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:38:21Z"
Entry not found
vinhnq29/Llama-3-Instruct-LORA
vinhnq29
"2024-06-18T15:40:08Z"
0
0
peft
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
"2024-06-18T15:38:58Z"
--- license: llama3 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: mathvi/output_model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/axolotl/mathvi/input_output_meta_llama_3_8b_instruct-00000-of-00001.parquet type: input_output dataset_prepared_path: val_set_size: 0.05 eval_sample_packing: false output_dir: mathvi/output_model2 sequence_len: 4096 sample_packing: false pad_to_sequence_len: false adapter: lora lora_model_dir: lora_r: 64 lora_alpha: 32 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 32 micro_batch_size: 4 num_epochs: 3 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 2e-4 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 warmup_steps: 10 evals_per_epoch: 10 eval_table_size: eval_max_new_tokens: 512 saves_per_epoch: 2 save_total_limit: 20 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # mathvi/output_model2 
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0442 | 0.0190 | 1 | 2.0734 | | 1.449 | 0.1137 | 6 | 1.2774 | | 0.8548 | 0.2275 | 12 | 0.9006 | | 0.8561 | 0.3412 | 18 | 0.7924 | | 0.744 | 0.4550 | 24 | 0.7176 | | 0.6752 | 0.5687 | 30 | 0.6603 | | 0.5908 | 0.6825 | 36 | 0.6117 | | 0.5229 | 0.7962 | 42 | 0.5702 | | 0.558 | 0.9100 | 48 | 0.5281 | | 0.4343 | 1.0237 | 54 | 0.4752 | | 0.4039 | 1.1374 | 60 | 0.4152 | | 0.3744 | 1.2512 | 66 | 0.4225 | | 0.3313 | 1.3649 | 72 | 0.3852 | | 0.374 | 1.4787 | 78 | 0.3740 | | 0.3246 | 1.5924 | 84 | 0.3657 | | 0.3392 | 1.7062 | 90 | 0.3591 | | 0.3309 | 1.8199 | 96 | 0.3505 | | 0.3621 | 1.9336 | 102 | 0.3437 | | 0.2819 | 2.0474 | 108 | 0.3416 | | 0.2672 | 2.1611 | 114 | 0.3414 | | 0.2284 | 2.2749 | 120 | 0.3375 | | 0.2836 | 2.3886 | 126 | 0.3353 | | 0.2504 | 2.5024 | 132 | 0.3337 | | 0.2696 | 2.6161 | 138 | 0.3328 | | 0.2775 | 2.7299 | 144 | 0.3327 | | 0.2554 | 2.8436 | 150 | 0.3325 | | 0.2551 | 2.9573 | 156 | 0.3327 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.1.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Beijuka/wav2vec2-large-xls-r-300m-FL-xh-10hr
Beijuka
"2024-06-18T15:39:27Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T15:39:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
XLS/OmniNA-220m
XLS
"2024-06-18T15:47:47Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T15:39:28Z"
---
license: apache-2.0
---
kolibree/actor
kolibree
"2024-06-18T15:39:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:39:48Z"
Entry not found
fernando10/gpt2-yelp_review_full-100000
fernando10
"2024-06-18T15:39:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:39:48Z"
Entry not found
lucasaltmann/7506195143834
lucasaltmann
"2024-06-18T18:39:03Z"
0
0
ultralytics
[ "ultralytics", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "model-index", "region:us" ]
image-classification
"2024-06-18T15:40:35Z"
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
library_name: ultralytics
library_version: 8.0.239
inference: false
model-index:
- name: lucasaltmann/7506195143834
  results:
  - task:
      type: image-classification
    metrics:
    - type: accuracy
      value: 1  # min: 0.0 - max: 1.0
      name: top1 accuracy
    - type: accuracy
      value: 1  # min: 0.0 - max: 1.0
      name: top5 accuracy
---

<div align="center">
  <img width="640" alt="lucasaltmann/7506195143834" src="https://huggingface.co/lucasaltmann/7506195143834/resolve/main/thumbnail.jpg">
</div>

### Supported Labels

```
['DOWNY']
```

### How to use

- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):

```bash
pip install ultralyticsplus==0.0.29 ultralytics==8.0.239
```

- Load model and perform prediction:

```python
from ultralyticsplus import YOLO, postprocess_classify_output

# load model
model = YOLO('lucasaltmann/7506195143834')

# set model parameters
model.overrides['conf'] = 0.25  # model confidence threshold

# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# perform inference
results = model.predict(image)

# observe results
print(results[0].probs)  # [0.1, 0.2, 0.3, 0.4]
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result)  # {"cat": 0.4, "dog": 0.6}
```
Beijuka/wav2vec2_xls_r_300m_FL_xh_10hr_v1
Beijuka
"2024-06-18T17:28:59Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-18T15:41:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arshiaez/hubbert
arshiaez
"2024-06-18T15:42:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:42:27Z"
Entry not found
Dongwookss/big_fut_final
Dongwookss
"2024-06-19T00:26:04Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "trl", "sft", "conversational", "ko", "dataset:mintaeng/llm_futsaldata_yo", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T15:42:28Z"
--- library_name: transformers tags: - unsloth - trl - sft datasets: - mintaeng/llm_futsaldata_yo license: apache-2.0 language: - ko --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> - train for 7h23m - ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Dongwookss - **Model type:** [More Information Needed] - **Language(s) (NLP):** Korean - **Finetuned from model :** HuggingFaceH4/zephyr-7b-beta ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kathija98/Meta-Llama-3-8B-text-to-sql
kathija98
"2024-06-18T15:42:31Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-18T15:42:31Z"
---
license: apache-2.0
---
Abeee/actor
Abeee
"2024-06-18T15:42:31Z"
0
0
null
[ "region:us" ]
null
"2024-06-18T15:42:31Z"
Entry not found