Dataset schema (one row per Hub model):

| Column        | Type     | Range / cardinality   |
|---------------|----------|-----------------------|
| modelId       | string   | length 5 to 122       |
| author        | string   | length 2 to 42        |
| last_modified | unknown  |                       |
| downloads     | int64    | 0 to 738M             |
| likes         | int64    | 0 to 11k              |
| library_name  | string   | 245 distinct classes  |
| tags          | sequence | length 1 to 4.05k     |
| pipeline_tag  | string   | 48 distinct classes   |
| createdAt     | unknown  |                       |
| card          | string   | length 1 to 901k      |
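As orientation for the rows below, here is a minimal sketch of loading and filtering a dump with this schema via the `datasets` library. The dataset id is a hypothetical placeholder, since the preview does not name its source.

```python
# Minimal sketch: inspect a model-card metadata dump with this schema.
# "your-org/hub-model-cards" is a hypothetical placeholder id, not a real repo.
from datasets import load_dataset

ds = load_dataset("your-org/hub-model-cards", split="train")

# Keep rows from the `peft` library that declare a pipeline tag.
peft_rows = ds.filter(
    lambda row: row["library_name"] == "peft" and row["pipeline_tag"] is not None
)

for row in peft_rows.select(range(min(5, len(peft_rows)))):
    print(row["modelId"], row["downloads"], row["likes"])
```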
**ballelakha/code-llama-7b-text-to-sql**
author: ballelakha | library: peft | pipeline_tag: null | downloads: 0 | likes: 0 | created: 2024-06-23T11:25:48Z | last_modified: 2024-06-23T19:18:06Z
tags: peft, tensorboard, safetensors, trl, sft, generated_from_trainer, dataset:generator, base_model:codellama/CodeLlama-7b-hf, license:llama2, region:us
Card:
---
base_model: codellama/CodeLlama-7b-hf
datasets:
- generator
library_name: peft
license: llama2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: code-llama-7b-text-to-sql
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# code-llama-7b-text-to-sql

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.

## Model description

More information needed.

## Intended uses & limitations

More information needed.

## Training and evaluation data

More information needed.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
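The card leaves its usage sections empty. A minimal inference sketch, assuming the repository hosts a PEFT/LoRA adapter on top of codellama/CodeLlama-7b-hf as the metadata indicates; the prompt format is an assumption, since the card does not document one.

```python
# Minimal sketch: load the adapter and generate SQL. The prompt format and
# generation settings are assumptions; the card documents neither.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "ballelakha/code-llama-7b-text-to-sql",
    torch_dtype=torch.float16,
    device_map="auto",
)
# Load the tokenizer from the base model, which is guaranteed to ship one.
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = (
    "-- Schema: CREATE TABLE users(id INT, name TEXT)\n"
    "-- Question: count all users\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```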
**SwimChoi/villama2-7b-chat-Albania-lora**
author: SwimChoi | library: peft | pipeline_tag: null | downloads: 0 | likes: 0 | created: 2024-06-23T11:27:37Z | last_modified: 2024-06-23T11:27:39Z
tags: peft, safetensors, arxiv:1910.09700, base_model:meta-llama/Llama-2-7b-chat-hf, region:us
Card:
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
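This card's "How to Get Started" section is empty. A minimal sketch, assuming this is a LoRA adapter for meta-llama/Llama-2-7b-chat-hf as the metadata states (the base model is gated, so Hub access is required); the same pattern applies to the other villama2-7b-chat-*-lora records below.

```python
# Minimal sketch: attach one of the villama2 LoRA adapters to its base model.
# Assumes access to the gated meta-llama/Llama-2-7b-chat-hf weights.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "SwimChoi/villama2-7b-chat-Albania-lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

inputs = tokenizer("Tell me about Albania.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```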
**Mesutby/mistral-7b-wiki-llama2-13b-translate**
author: Mesutby | library: transformers | pipeline_tag: null | downloads: 0 | likes: 0 | created: 2024-06-23T11:30:49Z | last_modified: 2024-06-23T11:31:06Z
tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us
Card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
**SwimChoi/villama2-7b-chat-Finland-lora**
author: SwimChoi | library: peft | pipeline_tag: null | downloads: 0 | likes: 0 | created: 2024-06-23T11:31:32Z | last_modified: 2024-06-23T11:31:34Z
tags: peft, safetensors, arxiv:1910.09700, base_model:meta-llama/Llama-2-7b-chat-hf, region:us
Card: verbatim identical to the SwimChoi/villama2-7b-chat-Albania-lora card above (default auto-generated PEFT template; Framework versions: PEFT 0.10.1.dev0).
**SwimChoi/villama2-7b-chat-Spain-lora**
author: SwimChoi | library: peft | pipeline_tag: null | downloads: 0 | likes: 0 | created: 2024-06-23T11:32:50Z | last_modified: 2024-06-23T11:32:53Z
tags: peft, safetensors, arxiv:1910.09700, base_model:meta-llama/Llama-2-7b-chat-hf, region:us
Card: verbatim identical to the SwimChoi/villama2-7b-chat-Albania-lora card above (default auto-generated PEFT template; Framework versions: PEFT 0.10.1.dev0).
**damgomz/ft_32_2e6_x1**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:33:02Z | last_modified: 2024-06-24T06:31:18Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 73133.42948484421 | | Emissions (Co2eq in kg) | 0.0442541526321102 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8633788051742626 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0761798588054874 | | Consumed energy (kWh) | 0.939558663979752 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1407818517583251 | | Emissions (Co2eq in kg) | 0.028643926548230645 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/ThunBERT_bs16_lr5_MLM | | model_name | ft_32_2e6_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 2e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.696710 | 0.460676 | | 1 | 0.476093 | 0.341695 | 0.902292 | | 2 | 0.273975 | 0.250606 | 0.931251 | | 3 | 0.207042 | 0.217707 | 0.931069 | | 4 | 0.174314 | 0.209037 | 0.920646 | | 5 | 0.151357 | 0.201929 | 0.929899 | | 6 | 0.132784 | 0.203329 | 0.922624 |
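The damgomz ft_32_* records here and below are ALBERT-based text classifiers whose cards include a widget example but no usage code. A minimal sketch, assuming the checkpoints work with the stock transformers pipeline; the cards do not document the label set, so the output labels are unspecified.

```python
# Minimal sketch: query one of the damgomz ft_32_* classifiers.
# Label semantics are an assumption; the cards do not document them.
from transformers import pipeline

clf = pipeline("text-classification", model="damgomz/ft_32_2e6_x1")

text = "GEPS Techno is the pioneer of hybridization of renewable energies at sea."
print(clf(text))  # e.g. [{'label': ..., 'score': ...}]
```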
**damgomz/ft_32_15e6_base_x1**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:33:05Z | last_modified: 2024-06-24T06:00:46Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 71300.66345405579 | | Emissions (Co2eq in kg) | 0.0431451187442411 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8417420373997773 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0742707594461738 | | Consumed energy (kWh) | 0.9160127968459512 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.13725377714905737 | | Emissions (Co2eq in kg) | 0.027926093186171848 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_15e6_base_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.5e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.750938 | 0.811297 | | 1 | 0.313397 | 0.223373 | 0.904675 | | 2 | 0.188140 | 0.206473 | 0.915484 | | 3 | 0.135871 | 0.224724 | 0.933423 | | 4 | 0.094995 | 0.255774 | 0.916496 | | 5 | 0.068506 | 0.282793 | 0.917980 | | 6 | 0.043524 | 0.319166 | 0.906638 |
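The per-epoch tables in these cards report an "F-beta Score" without stating which beta was used. For reference, the general definition in terms of precision $P$ and recall $R$ is

$$ F_\beta = (1 + \beta^2)\,\frac{P \cdot R}{\beta^2 P + R}, $$

where $\beta = 1$ recovers the usual F1 score and $\beta > 1$ weights recall more heavily.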
**damgomz/ft_32_8e6_x1**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:33:47Z | last_modified: 2024-06-24T05:52:32Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 70807.24902772903 | | Emissions (Co2eq in kg) | 0.0428465486543808 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8359170416593541 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0737568220458926 | | Consumed energy (kWh) | 0.9096738637052478 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1363039543783784 | | Emissions (Co2eq in kg) | 0.027732839202527206 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/ThunBERT_bs16_lr5_MLM | | model_name | ft_32_8e6_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 8e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.713561 | 0.452870 | | 1 | 0.327850 | 0.219759 | 0.936395 | | 2 | 0.172179 | 0.188669 | 0.931224 | | 3 | 0.126528 | 0.210671 | 0.922720 | | 4 | 0.083892 | 0.225833 | 0.918645 | | 5 | 0.046991 | 0.259591 | 0.919505 | | 6 | 0.025887 | 0.288153 | 0.925096 |
**damgomz/ft_32_9e6_x8**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:33:48Z | last_modified: 2024-06-24T05:51:31Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 70746.0561144352 | | Emissions (Co2eq in kg) | 0.0428095242767772 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8351947025140107 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0736930976765851 | | Consumed energy (kWh) | 0.9088878001905956 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.13618615802028777 | | Emissions (Co2eq in kg) | 0.027708871978153783 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x8 | | model_name | ft_32_9e6_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 9e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.694762 | 0.553192 | | 1 | 0.332558 | 0.236487 | 0.930078 | | 2 | 0.188264 | 0.218494 | 0.908930 | | 3 | 0.141132 | 0.237576 | 0.909407 | | 4 | 0.097378 | 0.287268 | 0.900907 | | 5 | 0.058418 | 0.308325 | 0.917448 | | 6 | 0.035490 | 0.340983 | 0.917140 |
**damgomz/ft_32_8e6_base_x1**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:33:50Z | last_modified: 2024-06-24T05:57:06Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 71081.5351600647 | | Emissions (Co2eq in kg) | 0.0430125248757196 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8391551482164213 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0740425519538421 | | Consumed energy (kWh) | 0.913197700170265 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.13683195518312455 | | Emissions (Co2eq in kg) | 0.027840267937692002 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_8e6_base_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 8e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.705991 | 0.500668 | | 1 | 0.355747 | 0.272755 | 0.915741 | | 2 | 0.199433 | 0.224355 | 0.904226 | | 3 | 0.143276 | 0.215985 | 0.923839 | | 4 | 0.099569 | 0.237419 | 0.930634 | | 5 | 0.062744 | 0.262038 | 0.916895 | | 6 | 0.035835 | 0.304088 | 0.925675 |
**damgomz/ft_32_8e6_x2**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:33:58Z | last_modified: 2024-06-24T05:56:21Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 71036.1094212532 | | Emissions (Co2eq in kg) | 0.0429850333260201 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8386187978626949 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0739952299927672 | | Consumed energy (kWh) | 0.9126140278554624 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1367445106359124 | | Emissions (Co2eq in kg) | 0.027822476189990838 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x2 | | model_name | ft_32_8e6_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 8e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.700990 | 0.499034 | | 1 | 0.346719 | 0.226252 | 0.900140 | | 2 | 0.175822 | 0.196907 | 0.930378 | | 3 | 0.129350 | 0.202716 | 0.927310 | | 4 | 0.081247 | 0.232037 | 0.930079 | | 5 | 0.042553 | 0.288999 | 0.918371 | | 6 | 0.023308 | 0.342417 | 0.918879 |
**damgomz/ft_32_8e6_base_x2**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:34:01Z | last_modified: 2024-06-24T05:57:42Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 71115.1747097969 | | Emissions (Co2eq in kg) | 0.0430328800833591 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.839552293203108 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0740775678053499 | | Consumed energy (kWh) | 0.9136298610084536 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.13689671131635903 | | Emissions (Co2eq in kg) | 0.027853443428003787 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_8e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 8e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.712955 | 0.163837 | | 1 | 0.334197 | 0.228866 | 0.922535 | | 2 | 0.192102 | 0.218541 | 0.927719 | | 3 | 0.136476 | 0.223534 | 0.921437 | | 4 | 0.088440 | 0.291644 | 0.894339 | | 5 | 0.045563 | 0.311659 | 0.915801 | | 6 | 0.025512 | 0.358274 | 0.919511 |
**damgomz/ft_32_9e6_base_x8**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:34:09Z | last_modified: 2024-06-24T05:59:37Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 71231.60269165039 | | Emissions (Co2eq in kg) | 0.0431033321668903 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8409267483289051 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0741988788741331 | | Consumed energy (kWh) | 0.915125627203039 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.137120835181427 | | Emissions (Co2eq in kg) | 0.02789904438756307 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_9e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 9e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.728312 | 0.166667 | | 1 | 0.345853 | 0.236986 | 0.914687 | | 2 | 0.219481 | 0.244534 | 0.912125 | | 3 | 0.169655 | 0.241377 | 0.921095 | | 4 | 0.133692 | 0.271366 | 0.890746 | | 5 | 0.098565 | 0.291976 | 0.900418 | | 6 | 0.067091 | 0.302501 | 0.898679 |
**damgomz/ft_32_9e6_x12**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:34:45Z | last_modified: 2024-06-24T02:21:40Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | [More Information Needed] | | Emissions (Co2eq in kg) | [More Information Needed] | | CPU power (W) | [NO CPU] | | GPU power (W) | [No GPU] | | RAM power (W) | [More Information Needed] | | CPU energy (kWh) | [No CPU] | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | [More Information Needed] | | Consumed energy (kWh) | [More Information Needed] | | Country name | [More Information Needed] | | Cloud provider | [No Cloud] | | Cloud region | [No Cloud] | | CPU count | [No CPU] | | CPU model | [No CPU] | | GPU count | [No GPU] | | GPU model | [No GPU] | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | [No CPU] | | Emissions (Co2eq in kg) | [More Information Needed] | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x12 | | model_name | ft_32_9e6_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 9e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.718000 | 0.477926 | | 1 | 0.331600 | 0.239242 | 0.921343 | | 2 | 0.194133 | 0.212801 | 0.929959 | | 3 | 0.148132 | 0.219373 | 0.922885 | | 4 | 0.103755 | 0.232002 | 0.920441 | | 5 | 0.067467 | 0.289143 | 0.920952 | | 6 | 0.040703 | 0.313676 | 0.927784 |
**damgomz/ft_32_4e6_base_x8**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:35:01Z | last_modified: 2024-06-24T06:23:57Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 72691.67709064484 | | Emissions (Co2eq in kg) | 0.0439868478257266 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8581637735386716 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0757197496503593 | | Consumed energy (kWh) | 0.9338835231890332 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1399314783994913 | | Emissions (Co2eq in kg) | 0.02847090686050256 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_4e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 4e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.764698 | 0.508976 | | 1 | 0.378353 | 0.279458 | 0.901742 | | 2 | 0.240503 | 0.258632 | 0.891390 | | 3 | 0.198528 | 0.230921 | 0.910689 | | 4 | 0.172870 | 0.249744 | 0.908769 | | 5 | 0.146713 | 0.229990 | 0.928140 | | 6 | 0.123440 | 0.235008 | 0.912746 |
**damgomz/ft_32_1e6_x8**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:35:14Z | last_modified: 2024-06-24T06:57:03Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 74678.3544178009 | | Emissions (Co2eq in kg) | 0.0451890062334828 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8816173877004104 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0777891266725958 | | Consumed energy (kWh) | 0.9594065143730034 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14375583225426672 | | Emissions (Co2eq in kg) | 0.029249022146972017 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x8 | | model_name | ft_32_1e6_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.699315 | 0.666534 | | 1 | 0.576768 | 0.405692 | 0.874298 | | 2 | 0.331982 | 0.305439 | 0.901113 | | 3 | 0.269726 | 0.265558 | 0.894345 | | 4 | 0.240578 | 0.251993 | 0.904761 | | 5 | 0.221913 | 0.251250 | 0.899448 | | 6 | 0.206690 | 0.238627 | 0.902659 |
**damgomz/ft_32_14e6_x8**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:35:18Z | last_modified: 2024-06-24T06:40:31Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 73686.00679969788 | | Emissions (Co2eq in kg) | 0.0445885228116967 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8699022043281097 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0767554634856682 | | Consumed energy (kWh) | 0.9466576678137772 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14184556308941842 | | Emissions (Co2eq in kg) | 0.028860352663215003 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x8 | | model_name | ft_32_14e6_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.4e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.706377 | 0.300975 | | 1 | 0.304260 | 0.217328 | 0.933274 | | 2 | 0.173123 | 0.211620 | 0.926305 | | 3 | 0.120896 | 0.245777 | 0.910319 | | 4 | 0.072855 | 0.317895 | 0.899215 | | 5 | 0.040877 | 0.361288 | 0.906391 | | 6 | 0.027597 | 0.408333 | 0.890392 |
**SwimChoi/villama2-7b-chat-Hungary-lora**
author: SwimChoi | library: peft | pipeline_tag: null | downloads: 0 | likes: 0 | created: 2024-06-23T11:35:29Z | last_modified: 2024-06-23T11:35:31Z
tags: peft, safetensors, arxiv:1910.09700, base_model:meta-llama/Llama-2-7b-chat-hf, region:us
Card: verbatim identical to the SwimChoi/villama2-7b-chat-Albania-lora card above (default auto-generated PEFT template; Framework versions: PEFT 0.10.1.dev0).
**damgomz/ft_32_14e6_base_x8**
author: damgomz | library: transformers | pipeline_tag: text-classification | downloads: 0 | likes: 0 | created: 2024-06-23T11:35:35Z | last_modified: 2024-06-24T06:38:12Z
tags: transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us
Card:
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 73546.51729655266 | | Emissions (Co2eq in kg) | 0.0445041229725192 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8682555832811519 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0766101939320565 | | Consumed energy (kWh) | 0.944865777213206 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14157704579586386 | | Emissions (Co2eq in kg) | 0.028805719274483124 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_14e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.4e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.728393 | 0.430185 | | 1 | 0.338907 | 0.244195 | 0.899370 | | 2 | 0.205180 | 0.234936 | 0.911768 | | 3 | 0.162625 | 0.230914 | 0.922030 | | 4 | 0.115233 | 0.251901 | 0.919139 | | 5 | 0.083068 | 0.293237 | 0.928307 | | 6 | 0.053920 | 0.383740 | 0.894417 |
damgomz/ft_32_9e6_x1
damgomz
"2024-06-24T02:03:45Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:35:48Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 57080.44506430626 | | Emissions (Co2eq in kg) | 0.0345402688547107 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.6738651338876944 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0594583381131291 | | Consumed energy (kWh) | 0.7333234720008226 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.10987985674878956 | | Emissions (Co2eq in kg) | 0.022356507650186618 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/ThunBERT_bs16_lr5_MLM | | model_name | ft_32_9e6_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 9e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.692473 | 0.592608 | | 1 | 0.316702 | 0.213212 | 0.924493 | | 2 | 0.175157 | 0.197533 | 0.930931 | | 3 | 0.121310 | 0.211575 | 0.928716 | | 4 | 0.076743 | 0.248609 | 0.903535 | | 5 | 0.040375 | 0.274094 | 0.934933 | | 6 | 0.022541 | 0.298416 | 0.918986 |
damgomz/ft_32_9e6_base_x2
damgomz
"2024-06-24T02:08:57Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:03Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 57392.34598207474 | | Emissions (Co2eq in kg) | 0.0347290095054235 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.677547371751068 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0597832476715247 | | Consumed energy (kWh) | 0.7373306194225937 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.11048026601549386 | | Emissions (Co2eq in kg) | 0.02247866884297927 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_9e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 9e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.697832 | 0.570185 | | 1 | 0.322858 | 0.232063 | 0.906114 | | 2 | 0.186997 | 0.224411 | 0.913751 | | 3 | 0.133911 | 0.236310 | 0.921935 | | 4 | 0.082132 | 0.286539 | 0.908996 | | 5 | 0.047003 | 0.316794 | 0.919619 | | 6 | 0.025847 | 0.379194 | 0.922226 |
damgomz/ft_32_13e6_x2
damgomz
"2024-06-24T02:27:13Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:03Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 58488.23245668411 | | Emissions (Co2eq in kg) | 0.0353921421592872 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.6904848104238501 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0609247596338392 | | Consumed energy (kWh) | 0.7514095700576897 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.11258984747911689 | | Emissions (Co2eq in kg) | 0.022907891045534607 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x2 | | model_name | ft_32_13e6_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.3e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.717840 | 0.334076 | | 1 | 0.315632 | 0.216841 | 0.910996 | | 2 | 0.164607 | 0.200249 | 0.919572 | | 3 | 0.107553 | 0.229726 | 0.926967 | | 4 | 0.059202 | 0.274537 | 0.919026 | | 5 | 0.031905 | 0.355991 | 0.906179 | | 6 | 0.024201 | 0.330097 | 0.924459 |
damgomz/ft_32_1e6_x12
damgomz
"2024-06-24T07:12:13Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:05Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 75587.16671800613 | | Emissions (Co2eq in kg) | 0.0457389520902086 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8923465514885052 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0787358478280402 | | Consumed energy (kWh) | 0.9710823993165454 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1455052959321618 | | Emissions (Co2eq in kg) | 0.029604973631219063 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x12 | | model_name | ft_32_1e6_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.703722 | 0.444199 | | 1 | 0.556502 | 0.400478 | 0.859853 | | 2 | 0.339855 | 0.310943 | 0.885239 | | 3 | 0.278223 | 0.275363 | 0.895876 | | 4 | 0.244626 | 0.258921 | 0.901462 | | 5 | 0.224978 | 0.248962 | 0.902732 | | 6 | 0.209949 | 0.244824 | 0.904007 |
damgomz/ft_32_14e6_base_x12
damgomz
"2024-06-24T00:13:06Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:12Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | [More Information Needed] | | Emissions (Co2eq in kg) | [More Information Needed] | | CPU power (W) | [NO CPU] | | GPU power (W) | [No GPU] | | RAM power (W) | [More Information Needed] | | CPU energy (kWh) | [No CPU] | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | [More Information Needed] | | Consumed energy (kWh) | [More Information Needed] | | Country name | [More Information Needed] | | Cloud provider | [No Cloud] | | Cloud region | [No Cloud] | | CPU count | [No CPU] | | CPU model | [No CPU] | | GPU count | [No GPU] | | GPU model | [No GPU] | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | [No CPU] | | Emissions (Co2eq in kg) | [More Information Needed] | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_14e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.4e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.759346 | 0.417754 | | 1 | 0.372233 | 0.266370 | 0.896978 | | 2 | 0.232219 | 0.233363 | 0.922029 |
damgomz/ft_32_9e6_x2
damgomz
"2024-06-24T02:11:36Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:13Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 57551.50695157051 | | Emissions (Co2eq in kg) | 0.0348253126522685 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.6794262211902263 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0599490076216557 | | Consumed energy (kWh) | 0.739375228811883 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.11078665088177322 | | Emissions (Co2eq in kg) | 0.022541006889365115 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x2 | | model_name | ft_32_9e6_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 9e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.712329 | 0.333987 | | 1 | 0.354273 | 0.211401 | 0.920476 | | 2 | 0.174992 | 0.207867 | 0.947244 | | 3 | 0.128623 | 0.209257 | 0.924407 | | 4 | 0.078231 | 0.253078 | 0.916067 | | 5 | 0.042711 | 0.297808 | 0.922590 | | 6 | 0.023327 | 0.350900 | 0.910303 |
damgomz/ft_32_7e6_x12
damgomz
"2024-06-24T06:39:19Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:25Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 73613.58477163315 | | Emissions (Co2eq in kg) | 0.0445447026044134 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8690472684154916 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0766800538157424 | | Consumed energy (kWh) | 0.945727322231236 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14170615068539383 | | Emissions (Co2eq in kg) | 0.028831987368889648 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x12 | | model_name | ft_32_7e6_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 7e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.728219 | 0.346931 | | 1 | 0.344118 | 0.240342 | 0.907142 | | 2 | 0.201766 | 0.214865 | 0.912862 | | 3 | 0.153653 | 0.219846 | 0.924143 | | 4 | 0.119165 | 0.239911 | 0.922952 | | 5 | 0.078146 | 0.279338 | 0.906833 | | 6 | 0.046365 | 0.306035 | 0.910655 |
damgomz/ft_32_14e6_x12
damgomz
"2024-06-24T06:59:02Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:27Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 74796.74694538116 | | Emissions (Co2eq in kg) | 0.0452606526190783 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8830151409889248 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0779124957720438 | | Consumed energy (kWh) | 0.9609276367609688 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1439837378698587 | | Emissions (Co2eq in kg) | 0.02929539255360762 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x12 | | model_name | ft_32_14e6_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.4e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.731910 | 0.491272 | | 1 | 0.306111 | 0.214297 | 0.928138 | | 2 | 0.175756 | 0.218298 | 0.928514 | | 3 | 0.124031 | 0.235081 | 0.925558 | | 4 | 0.072022 | 0.331322 | 0.895840 | | 5 | 0.043305 | 0.356567 | 0.893325 | | 6 | 0.027826 | 0.386345 | 0.904293 |
damgomz/ft_32_4e6_x8
damgomz
"2024-06-24T06:39:00Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:28Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 73594.8251414299 | | Emissions (Co2eq in kg) | 0.0445333511652165 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8688258257473497 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0766604943990709 | | Consumed energy (kWh) | 0.9454863201464196 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14167003839725253 | | Emissions (Co2eq in kg) | 0.028824639847060043 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x8 | | model_name | ft_32_4e6_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 4e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.710025 | 0.263873 | | 1 | 0.391371 | 0.259821 | 0.899125 | | 2 | 0.222361 | 0.221185 | 0.916312 | | 3 | 0.182053 | 0.217045 | 0.920640 | | 4 | 0.152476 | 0.213209 | 0.918197 | | 5 | 0.123926 | 0.229059 | 0.915493 | | 6 | 0.096749 | 0.244092 | 0.923752 |
damgomz/ft_32_13e6_base_x4
damgomz
"2024-06-24T02:26:58Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:34Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 58472.9090526104 | | Emissions (Co2eq in kg) | 0.0353828713730564 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.6903039364420729 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0609088058151305 | | Consumed energy (kWh) | 0.7512127422572015 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.112560349926275 | | Emissions (Co2eq in kg) | 0.02290188937893907 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_13e6_base_x4 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.3e-05 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.769925 | 0.338207 | | 1 | 0.318355 | 0.234563 | 0.926135 | | 2 | 0.195246 | 0.224179 | 0.909345 | | 3 | 0.144236 | 0.233321 | 0.929572 | | 4 | 0.095344 | 0.285724 | 0.918945 | | 5 | 0.062180 | 0.348233 | 0.910088 | | 6 | 0.041635 | 0.362033 | 0.907800 |
damgomz/ft_32_4e6_x4
damgomz
"2024-06-24T06:40:06Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:36:41Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 73660.23208665848 | | Emissions (Co2eq in kg) | 0.0445729217319392 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.869597842776774 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0767285989535349 | | Consumed energy (kWh) | 0.9463264417303076 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14179594676681756 | | Emissions (Co2eq in kg) | 0.02885025756727457 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x4 | | model_name | ft_32_4e6_x4 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 4e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.705749 | 0.334336 | | 1 | 0.388053 | 0.253232 | 0.906801 | | 2 | 0.216997 | 0.217991 | 0.922703 | | 3 | 0.174467 | 0.224384 | 0.919802 | | 4 | 0.140312 | 0.213055 | 0.926038 | | 5 | 0.113477 | 0.223005 | 0.916428 | | 6 | 0.083865 | 0.240709 | 0.926557 |
damgomz/ft_32_4e6_base_x12
damgomz
"2024-06-24T06:47:42Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:37:01Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 74115.86743354797 | | Emissions (Co2eq in kg) | 0.0448486418826797 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.8749770166450078 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0772032318142553 | | Consumed energy (kWh) | 0.952180248459263 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.14267304480957985 | | Emissions (Co2eq in kg) | 0.029028714744806287 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_4e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 4e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.727735 | 0.456730 | | 1 | 0.396399 | 0.319076 | 0.889635 | | 2 | 0.285400 | 0.280777 | 0.903426 | | 3 | 0.246437 | 0.259405 | 0.917629 | | 4 | 0.216868 | 0.237590 | 0.903710 | | 5 | 0.189485 | 0.238768 | 0.925519 | | 6 | 0.174331 | 0.239025 | 0.904234 |
MJ-Bench/DiffusionDPO-alignment-gemini-1.5
MJ-Bench
"2024-06-23T11:37:55Z"
0
0
transformers
[ "transformers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "endpoints_compatible", "region:us" ]
text-to-image
"2024-06-23T11:37:53Z"
---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---

# Aligned Diffusion Model via DPO

Diffusion model aligned with the following reward models and the DPO algorithm:

```
close-sourced vlm: claude3-opus gemini-1.5 gpt-4o gpt-4v
open-sourced vlm: internvl-1.5
score model: hps-2.1
```

## How to Use

You can load the model and perform inference as follows:

```python
import torch  # was missing from the original snippet
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

pretrained_model_name = "runwayml/stable-diffusion-v1-5"

# Load the DPO-aligned UNet from the released checkpoint.
dpo_unet = UNet2DConditionModel.from_pretrained(
    "path/to/checkpoint", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Build the base pipeline and swap in the aligned UNet.
pipeline = StableDiffusionPipeline.from_pretrained(
    pretrained_model_name, torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")
pipeline.safety_checker = None
pipeline.unet = dpo_unet

generator = torch.Generator(device="cuda").manual_seed(1)

prompt = "a pink flower"
guidance_scale = 7.5  # assumed typical value; `gs` was undefined in the original snippet
image = pipeline(prompt=prompt, generator=generator, guidance_scale=guidance_scale).images[0]
```
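Note that in this snippet only the UNet is replaced: the DPO-aligned denoiser is dropped into an off-the-shelf `runwayml/stable-diffusion-v1-5` pipeline while the text encoder and VAE stay untouched, which is why a checkpoint path plus `subfolder="unet"` is enough to switch models. Disabling the safety checker is optional and unrelated to the alignment itself.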
SwimChoi/villama2-7b-chat-Israel-lora
SwimChoi
"2024-06-23T11:39:29Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:39:26Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
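The template above leaves the quick-start empty; here is a plausible adapter-loading sketch, assuming this repo is a standard PEFT LoRA adapter for the listed base model (access to the gated `meta-llama/Llama-2-7b-chat-hf` weights is required):

```python
# Hypothetical quick-start: load the base model, then attach this LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # base_model from the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "SwimChoi/villama2-7b-chat-Israel-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```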
rza96/my-finetuned-emotion-distilbert
rza96
"2024-06-23T14:13:02Z"
0
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:39:51Z"
Entry not found
0xfaskety/Qwen-Qwen1.5-7B-1719142846
0xfaskety
"2024-06-23T11:40:46Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T11:40:46Z"
Entry not found
oz1115/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
oz1115
"2024-06-23T11:42:47Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T11:42:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
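The card itself is an unfilled template, but the repo name (`bloomz-560m_PROMPT_TUNING_CAUSAL_LM`) matches the PEFT prompt-tuning recipe; a hypothetical reconstruction of that setup (the virtual-token count is an assumption):

```python
# Hypothetical prompt-tuning setup implied by the repo name; not confirmed by the card.
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,  # assumed value; the card does not say
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the soft prompt is trainable
```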
thyti/deneme
thyti
"2024-06-23T11:44:37Z"
0
0
null
[ "license:llama2", "region:us" ]
null
"2024-06-23T11:44:37Z"
--- license: llama2 ---
SwimChoi/villama2-7b-chat-Lithuania-lora
SwimChoi
"2024-06-23T11:46:02Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:45:58Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
gechim/PhoBert_Lexical_Dataset59KCoDuoi
gechim
"2024-06-23T11:48:15Z"
0
0
transformers
[ "transformers", "safetensors", "roberta", "generated_from_trainer", "base_model:vinai/phobert-base-v2", "endpoints_compatible", "region:us" ]
null
"2024-06-23T11:47:35Z"
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBert_Lexical_Dataset59KCoDuoi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# PhoBert_Lexical_Dataset59KCoDuoi

This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2741
- Accuracy: 0.9600
- F1: 0.9602

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|
| No log | 0.2558 | 200 | 0.1794 | 0.9396 | 0.9400 |
| No log | 0.5115 | 400 | 0.1533 | 0.9475 | 0.9479 |
| No log | 0.7673 | 600 | 0.1522 | 0.9496 | 0.9499 |
| 0.1767 | 1.0230 | 800 | 0.1494 | 0.9542 | 0.9545 |
| 0.1767 | 1.2788 | 1000 | 0.1485 | 0.9519 | 0.9520 |
| 0.1767 | 1.5345 | 1200 | 0.1608 | 0.9524 | 0.9523 |
| 0.1767 | 1.7903 | 1400 | 0.1223 | 0.9580 | 0.9582 |
| 0.1176 | 2.0460 | 1600 | 0.1462 | 0.9600 | 0.9603 |
| 0.1176 | 2.3018 | 1800 | 0.1363 | 0.9588 | 0.9591 |
| 0.1176 | 2.5575 | 2000 | 0.1441 | 0.9574 | 0.9577 |
| 0.1176 | 2.8133 | 2200 | 0.1369 | 0.9566 | 0.9568 |
| 0.0972 | 3.0691 | 2400 | 0.1530 | 0.9547 | 0.9550 |
| 0.0972 | 3.3248 | 2600 | 0.1278 | 0.9607 | 0.9608 |
| 0.0972 | 3.5806 | 2800 | 0.1334 | 0.9604 | 0.9606 |
| 0.0972 | 3.8363 | 3000 | 0.1280 | 0.9608 | 0.9609 |
| 0.0821 | 4.0921 | 3200 | 0.1379 | 0.9603 | 0.9604 |
| 0.0821 | 4.3478 | 3400 | 0.1466 | 0.9587 | 0.9589 |
| 0.0821 | 4.6036 | 3600 | 0.1379 | 0.9604 | 0.9606 |
| 0.0821 | 4.8593 | 3800 | 0.1347 | 0.9606 | 0.9607 |
| 0.0687 | 5.1151 | 4000 | 0.1492 | 0.9614 | 0.9614 |
| 0.0687 | 5.3708 | 4200 | 0.1611 | 0.9606 | 0.9606 |
| 0.0687 | 5.6266 | 4400 | 0.1407 | 0.9594 | 0.9596 |
| 0.0687 | 5.8824 | 4600 | 0.1446 | 0.9590 | 0.9591 |
| 0.0584 | 6.1381 | 4800 | 0.1659 | 0.9575 | 0.9578 |
| 0.0584 | 6.3939 | 5000 | 0.1666 | 0.9602 | 0.9602 |
| 0.0584 | 6.6496 | 5200 | 0.1683 | 0.9586 | 0.9588 |
| 0.0584 | 6.9054 | 5400 | 0.1668 | 0.9609 | 0.9611 |
| 0.0477 | 7.1611 | 5600 | 0.1844 | 0.9580 | 0.9582 |
| 0.0477 | 7.4169 | 5800 | 0.1695 | 0.9626 | 0.9627 |
| 0.0477 | 7.6726 | 6000 | 0.1767 | 0.9596 | 0.9597 |
| 0.0477 | 7.9284 | 6200 | 0.1960 | 0.9594 | 0.9596 |
| 0.0397 | 8.1841 | 6400 | 0.1932 | 0.9599 | 0.9600 |
| 0.0397 | 8.4399 | 6600 | 0.1990 | 0.9593 | 0.9594 |
| 0.0397 | 8.6957 | 6800 | 0.1999 | 0.9602 | 0.9603 |
| 0.0397 | 8.9514 | 7000 | 0.1803 | 0.9577 | 0.9580 |
| 0.0349 | 9.2072 | 7200 | 0.2082 | 0.9574 | 0.9575 |
| 0.0349 | 9.4629 | 7400 | 0.2075 | 0.9597 | 0.9598 |
| 0.0349 | 9.7187 | 7600 | 0.2269 | 0.9577 | 0.9577 |
| 0.0349 | 9.9744 | 7800 | 0.1990 | 0.9602 | 0.9602 |
| 0.0294 | 10.2302 | 8000 | 0.1987 | 0.9599 | 0.9600 |
| 0.0294 | 10.4859 | 8200 | 0.2066 | 0.9563 | 0.9563 |
| 0.0294 | 10.7417 | 8400 | 0.2149 | 0.9595 | 0.9597 |
| 0.0257 | 10.9974 | 8600 | 0.2179 | 0.9609 | 0.9610 |
| 0.0257 | 11.2532 | 8800 | 0.2337 | 0.9593 | 0.9594 |
| 0.0257 | 11.5090 | 9000 | 0.2499 | 0.9573 | 0.9573 |
| 0.0257 | 11.7647 | 9200 | 0.2323 | 0.9575 | 0.9577 |
| 0.021 | 12.0205 | 9400 | 0.2330 | 0.9599 | 0.9601 |
| 0.021 | 12.2762 | 9600 | 0.2321 | 0.9603 | 0.9604 |
| 0.021 | 12.5320 | 9800 | 0.2431 | 0.9594 | 0.9594 |
| 0.021 | 12.7877 | 10000 | 0.2487 | 0.9581 | 0.9583 |
| 0.017 | 13.0435 | 10200 | 0.2606 | 0.9570 | 0.9570 |
| 0.017 | 13.2992 | 10400 | 0.2450 | 0.9582 | 0.9583 |
| 0.017 | 13.5550 | 10600 | 0.2647 | 0.9593 | 0.9596 |
| 0.017 | 13.8107 | 10800 | 0.2494 | 0.9595 | 0.9597 |
| 0.0155 | 14.0665 | 11000 | 0.2482 | 0.9582 | 0.9584 |
| 0.0155 | 14.3223 | 11200 | 0.2552 | 0.9605 | 0.9606 |
| 0.0155 | 14.5780 | 11400 | 0.2581 | 0.9583 | 0.9585 |
| 0.0155 | 14.8338 | 11600 | 0.2553 | 0.9609 | 0.9611 |
| 0.0146 | 15.0895 | 11800 | 0.2601 | 0.9591 | 0.9592 |
| 0.0146 | 15.3453 | 12000 | 0.2574 | 0.9593 | 0.9594 |
| 0.0146 | 15.6010 | 12200 | 0.2562 | 0.9614 | 0.9615 |
| 0.0146 | 15.8568 | 12400 | 0.2588 | 0.9596 | 0.9597 |
| 0.0114 | 16.1125 | 12600 | 0.2621 | 0.9581 | 0.9581 |
| 0.0114 | 16.3683 | 12800 | 0.2593 | 0.9591 | 0.9593 |
| 0.0114 | 16.6240 | 13000 | 0.2611 | 0.9607 | 0.9608 |
| 0.0114 | 16.8798 | 13200 | 0.2668 | 0.9600 | 0.9602 |
| 0.0091 | 17.1355 | 13400 | 0.2554 | 0.9618 | 0.9620 |
| 0.0091 | 17.3913 | 13600 | 0.2707 | 0.9596 | 0.9597 |
| 0.0091 | 17.6471 | 13800 | 0.2742 | 0.9597 | 0.9599 |
| 0.0091 | 17.9028 | 14000 | 0.2777 | 0.9590 | 0.9591 |
| 0.0057 | 18.1586 | 14200 | 0.2737 | 0.9596 | 0.9597 |
| 0.0057 | 18.4143 | 14400 | 0.2731 | 0.9598 | 0.9599 |
| 0.0057 | 18.6701 | 14600 | 0.2693 | 0.9606 | 0.9607 |
| 0.0057 | 18.9258 | 14800 | 0.2754 | 0.9597 | 0.9598 |
| 0.0074 | 19.1816 | 15000 | 0.2729 | 0.9602 | 0.9602 |
| 0.0074 | 19.4373 | 15200 | 0.2784 | 0.9595 | 0.9596 |
| 0.0074 | 19.6931 | 15400 | 0.2766 | 0.9598 | 0.9599 |
| 0.0074 | 19.9488 | 15600 | 0.2741 | 0.9600 | 0.9602 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
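For reference, the hyperparameters listed above map directly onto a `transformers` `TrainingArguments`; a sketch (the output directory is illustrative, and the data/model wiring is omitted):

```python
# TrainingArguments mirroring the card's hyperparameters; the Adam betas and
# epsilon match the transformers defaults, so they need no explicit flags.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="PhoBert_Lexical_Dataset59KCoDuoi",  # illustrative
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```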
PIXMELT/Qwarte7B-llama3-merged
PIXMELT
"2024-06-23T11:51:04Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-23T11:49:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
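The card is an unfilled template, but the entry's tags (`llama`, `text-generation`, `4-bit`, `bitsandbytes`) suggest a 4-bit quantized causal LM; a hypothetical loading sketch under that assumption:

```python
# Hypothetical 4-bit loading sketch inferred from the tags; not documented in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "PIXMELT/Qwarte7B-llama3-merged"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```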
SwimChoi/villama2-7b-chat-Slovakia-lora
SwimChoi
"2024-06-23T11:49:59Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:49:55Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
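The quick-start slot in the card above is empty, but the metadata gives enough to sketch usage: `library_name: peft` and `base_model: meta-llama/Llama-2-7b-chat-hf`. A minimal, hedged loading sketch for this adapter follows (the sibling `villama2-7b-chat-*-lora` records below load the same way, swapping the adapter id); the dtype, device map, and prompt format are illustrative assumptions, not confirmed by the card:

```python
# Hedged sketch: attach the villama2 LoRA adapter to its Llama-2 base.
# Repo ids come from the card metadata; fp16, device_map="auto", and the
# [INST] prompt format are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"               # card's base_model field
adapter_id = "SwimChoi/villama2-7b-chat-Slovakia-lora"  # this record's modelId

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)     # loads the LoRA weights

prompt = "[INST] Briefly describe Slovakia. [/INST]"    # Llama-2 chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```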
SwimChoi/villama2-7b-chat-Ukraine-lora
SwimChoi
"2024-06-23T11:51:17Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:51:15Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
SwimChoi/villama2-7b-chat-Russia-lora
SwimChoi
"2024-06-23T11:52:35Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:52:33Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
damgomz/ft_32_4e6_x12
damgomz
"2024-06-24T10:12:55Z"
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:53:02Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low-power platforms WAVEPEAL enabled us to scale the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 86429.07593941689 | | Emissions (Co2eq in kg) | 0.0522995229923188 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.020340481146506 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0900292136013505 | | Consumed energy (kWh) | 1.1103696947478574 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.1663759711833775 | | Emissions (Co2eq in kg) | 0.03385138807627161 | ## Note 19 June 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/fp_bs16_lr5_x12 | | model_name | ft_32_4e6_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 4e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.711692 | 0.490005 | | 1 | 0.398099 | 0.273404 | 0.886336 | | 2 | 0.235050 | 0.238728 | 0.920556 | | 3 | 0.189052 | 0.218307 | 0.919261 | | 4 | 0.156630 | 0.215790 | 0.925542 | | 5 | 0.130936 | 0.222257 | 0.930175 | | 6 | 0.100910 | 0.239654 | 0.920843 |
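The card above carries a widget example and a `text-classification` pipeline tag, so a hedged inference sketch is straightforward; the labels it prints depend on the checkpoint's undocumented class mapping:

```python
# Hedged sketch: classify text with the fine-tuned ALBERT checkpoint.
# Repo id and task come from the card; max_length=400 mirrors the card's
# sequence_length setting, and the label names are whatever the model stores.
from transformers import pipeline

clf = pipeline("text-classification", model="damgomz/ft_32_4e6_x12")
text = ("GEPS Techno is the pioneer of hybridization of renewable "
        "energies at sea.")
print(clf(text, truncation=True, max_length=400))
```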
AIStudioIR/llama3-model
AIStudioIR
"2024-06-23T11:53:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T11:53:36Z"
Entry not found
SwimChoi/villama2-7b-chat-Czech-lora
SwimChoi
"2024-06-23T11:53:54Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:53:51Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
SwimChoi/villama2-7b-chat-Ireland-lora
SwimChoi
"2024-06-23T11:55:12Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:55:09Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
SwimChoi/villama2-7b-chat-Iceland-lora
SwimChoi
"2024-06-23T11:57:48Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T11:57:46Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
itay-nakash/model_cb2b1e6d90_sweep_faithful-hill-855
itay-nakash
"2024-06-23T11:58:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T11:58:14Z"
Entry not found
itay-nakash/model_6c19c2b8b0_sweep_breezy-morning-857
itay-nakash
"2024-06-23T11:59:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T11:59:22Z"
Entry not found
itay-nakash/model_71dd0b85f5_sweep_still-dragon-856
itay-nakash
"2024-06-23T11:59:29Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T11:59:29Z"
Entry not found
zahraPoori76/Whisper-persian-quran
zahraPoori76
"2024-06-26T12:31:05Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T11:59:45Z"
Entry not found
itay-nakash/model_3f5c893599_sweep_revived-cloud-861
itay-nakash
"2024-06-23T12:00:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:00:32Z"
Entry not found
itay-nakash/model_0b8bff813c_sweep_sweet-dragon-859
itay-nakash
"2024-06-23T12:00:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:00:51Z"
Entry not found
itay-nakash/model_2ec771cb72_sweep_stellar-thunder-858
itay-nakash
"2024-06-23T12:01:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:01:01Z"
Entry not found
itay-nakash/model_6d5c5a99e5_sweep_blooming-glitter-860
itay-nakash
"2024-06-23T12:01:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:01:01Z"
Entry not found
SwimChoi/villama2-7b-chat-Norway-lora
SwimChoi
"2024-06-23T12:01:41Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T12:01:38Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
itay-nakash/model_9539ee4e06_sweep_summer-flower-862
itay-nakash
"2024-06-23T12:02:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:02:11Z"
Entry not found
SwimChoi/villama2-7b-chat-Italy-lora
SwimChoi
"2024-06-23T12:03:01Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T12:02:56Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
Chahatdatascience/config-2
Chahatdatascience
"2024-06-23T13:40:19Z"
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-23T12:03:09Z"
Entry not found
itay-nakash/model_47b4c49ddb_sweep_restful-disco-863
itay-nakash
"2024-06-23T12:03:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:03:15Z"
Entry not found
itay-nakash/model_fb5a361adf_sweep_visionary-deluge-864
itay-nakash
"2024-06-23T12:04:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:04:57Z"
Entry not found
gechim/XMLRoberta_Lexical_Dataset59KCoDuoi
gechim
"2024-06-23T12:06:32Z"
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-06-23T12:06:00Z"
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: XMLRoberta_Lexical_Dataset59KCoDuoi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XMLRoberta_Lexical_Dataset59KCoDuoi This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3668 - Accuracy: 0.9580 - F1: 0.9581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:| | No log | 0.2558 | 200 | 0.2520 | 0.9109 | 0.9116 | | No log | 0.5115 | 400 | 0.1839 | 0.9393 | 0.9398 | | No log | 0.7673 | 600 | 0.2109 | 0.9362 | 0.9369 | | 0.2271 | 1.0230 | 800 | 0.1567 | 0.9510 | 0.9512 | | 0.2271 | 1.2788 | 1000 | 0.1477 | 0.9500 | 0.9502 | | 0.2271 | 1.5345 | 1200 | 0.1551 | 0.9526 | 0.9529 | | 0.2271 | 1.7903 | 1400 | 0.1419 | 0.9538 | 0.9542 | | 0.1372 | 2.0460 | 1600 | 0.1607 | 0.9550 | 0.9554 | | 0.1372 | 2.3018 | 1800 | 0.1590 | 0.9568 | 0.9568 | | 0.1372 | 2.5575 | 2000 | 0.1415 | 0.9595 | 0.9596 | | 0.1372 | 2.8133 | 2200 | 0.1473 | 0.9580 | 0.9582 | | 0.1109 | 3.0691 | 2400 | 0.1644 | 0.9538 | 0.9542 | | 0.1109 | 3.3248 | 2600 | 0.1300 | 0.9605 | 0.9607 | | 0.1109 | 3.5806 | 2800 | 0.1664 | 0.9588 | 0.9591 | | 0.1109 | 3.8363 | 3000 | 0.1395 | 0.9570 | 0.9572 | | 0.0958 | 4.0921 | 3200 | 0.1602 | 0.9602 | 0.9603 | | 0.0958 | 4.3478 | 3400 | 0.1566 | 0.9615 | 0.9616 | | 0.0958 | 4.6036 | 3600 | 0.1413 | 0.9583 | 0.9586 | | 0.0958 | 4.8593 | 3800 | 0.1973 | 0.9582 | 0.9582 | | 0.083 | 5.1151 | 4000 | 0.1469 | 0.9591 | 0.9594 | | 0.083 | 5.3708 | 4200 | 0.1541 | 0.9603 | 0.9605 | | 0.083 | 5.6266 | 4400 | 0.1676 | 0.9585 | 0.9587 | | 0.083 | 5.8824 | 4600 | 0.1687 | 0.9602 | 0.9604 | | 0.0734 | 6.1381 | 4800 | 0.1865 | 0.9591 | 0.9592 | | 0.0734 | 6.3939 | 5000 | 0.1723 | 0.9569 | 0.9569 | | 0.0734 | 6.6496 | 5200 | 0.1761 | 0.9587 | 0.9589 | | 0.0734 | 6.9054 | 5400 | 0.1596 | 0.9613 | 0.9614 | | 0.0607 | 7.1611 | 5600 | 0.2193 | 0.9586 | 0.9588 | | 0.0607 | 7.4169 | 5800 | 0.1984 | 0.9595 | 0.9596 | | 0.0607 | 7.6726 | 6000 | 0.1745 | 0.9587 | 0.9589 | | 0.0607 | 7.9284 | 6200 | 0.1939 | 0.9614 | 0.9615 | | 0.0547 | 8.1841 | 6400 | 0.2081 | 0.9591 | 0.9592 | | 0.0547 | 8.4399 | 6600 | 0.2048 | 0.9599 | 0.9601 | | 0.0547 | 8.6957 | 6800 | 0.2260 | 0.9563 | 0.9565 | | 0.0547 | 8.9514 | 7000 | 0.1786 | 0.9598 | 0.9600 | | 0.047 | 9.2072 | 7200 | 0.2181 | 0.9596 | 0.9597 | | 0.047 | 9.4629 | 7400 | 0.2120 | 0.9602 | 0.9603 | | 0.047 | 9.7187 | 7600 | 0.2266 | 0.9597 | 0.9597 | | 0.047 | 9.9744 | 7800 | 0.2128 | 0.9581 | 0.9583 | | 0.0409 | 10.2302 | 8000 | 0.2207 | 0.9607 | 0.9608 | | 0.0409 | 10.4859 | 8200 | 0.2375 | 0.9597 | 0.9599 | | 0.0409 | 10.7417 | 8400 | 0.2241 | 0.9592 | 0.9593 | | 0.0368 | 10.9974 | 8600 | 
0.2181 | 0.9613 | 0.9613 | | 0.0368 | 11.2532 | 8800 | 0.2574 | 0.9598 | 0.9599 | | 0.0368 | 11.5090 | 9000 | 0.2598 | 0.9602 | 0.9602 | | 0.0368 | 11.7647 | 9200 | 0.2448 | 0.9592 | 0.9594 | | 0.0309 | 12.0205 | 9400 | 0.2521 | 0.9593 | 0.9594 | | 0.0309 | 12.2762 | 9600 | 0.2824 | 0.9599 | 0.9601 | | 0.0309 | 12.5320 | 9800 | 0.2606 | 0.9600 | 0.9602 | | 0.0309 | 12.7877 | 10000 | 0.2841 | 0.9610 | 0.9612 | | 0.0256 | 13.0435 | 10200 | 0.2662 | 0.9590 | 0.9591 | | 0.0256 | 13.2992 | 10400 | 0.2839 | 0.9582 | 0.9582 | | 0.0256 | 13.5550 | 10600 | 0.3053 | 0.9579 | 0.9580 | | 0.0256 | 13.8107 | 10800 | 0.2697 | 0.9573 | 0.9574 | | 0.0229 | 14.0665 | 11000 | 0.2741 | 0.9583 | 0.9584 | | 0.0229 | 14.3223 | 11200 | 0.2881 | 0.9596 | 0.9597 | | 0.0229 | 14.5780 | 11400 | 0.2921 | 0.9586 | 0.9588 | | 0.0229 | 14.8338 | 11600 | 0.3162 | 0.9598 | 0.9600 | | 0.0196 | 15.0895 | 11800 | 0.2989 | 0.9575 | 0.9576 | | 0.0196 | 15.3453 | 12000 | 0.3267 | 0.9568 | 0.9570 | | 0.0196 | 15.6010 | 12200 | 0.3113 | 0.9593 | 0.9594 | | 0.0196 | 15.8568 | 12400 | 0.3198 | 0.9595 | 0.9597 | | 0.0167 | 16.1125 | 12600 | 0.3355 | 0.9580 | 0.9582 | | 0.0167 | 16.3683 | 12800 | 0.3525 | 0.9566 | 0.9569 | | 0.0167 | 16.6240 | 13000 | 0.3337 | 0.9582 | 0.9584 | | 0.0167 | 16.8798 | 13200 | 0.3105 | 0.9583 | 0.9585 | | 0.0139 | 17.1355 | 13400 | 0.3348 | 0.9597 | 0.9599 | | 0.0139 | 17.3913 | 13600 | 0.3290 | 0.9592 | 0.9593 | | 0.0139 | 17.6471 | 13800 | 0.3476 | 0.9587 | 0.9589 | | 0.0139 | 17.9028 | 14000 | 0.3498 | 0.9583 | 0.9584 | | 0.0131 | 18.1586 | 14200 | 0.3483 | 0.9590 | 0.9590 | | 0.0131 | 18.4143 | 14400 | 0.3386 | 0.9587 | 0.9588 | | 0.0131 | 18.6701 | 14600 | 0.3512 | 0.9581 | 0.9582 | | 0.0131 | 18.9258 | 14800 | 0.3627 | 0.9581 | 0.9582 | | 0.01 | 19.1816 | 15000 | 0.3664 | 0.9572 | 0.9574 | | 0.01 | 19.4373 | 15200 | 0.3688 | 0.9576 | 0.9578 | | 0.01 | 19.6931 | 15400 | 0.3672 | 0.9579 | 0.9580 | | 0.01 | 19.9488 | 15600 | 0.3668 | 0.9580 | 0.9581 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
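With accuracy and F1 around 0.958 on its evaluation set, the checkpoint above is a plain sequence classifier; a hedged inference sketch follows, with the caveat that the card documents neither the dataset nor the label names:

```python
# Hedged sketch: run the fine-tuned XLM-RoBERTa classifier. The repo id
# comes from the card; the input sentence is illustrative, and the returned
# label ids follow the checkpoint's undocumented class mapping.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gechim/XMLRoberta_Lexical_Dataset59KCoDuoi",
)
print(clf("Example sentence to classify."))
```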
huhuhuhus/Qwen-Qwen1.5-1.8B-1719144371
huhuhuhus
"2024-06-23T12:06:15Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-06-23T12:06:11Z"
--- library_name: peft base_model: Qwen/Qwen1.5-1.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
YujieRen/bert-finetuned-ner
YujieRen
"2024-06-23T12:20:41Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-06-23T12:07:11Z"
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.936050364479788 - name: Recall type: recall value: 0.9508582968697409 - name: F1 type: f1 value: 0.9433962264150942 - name: Accuracy type: accuracy value: 0.9865632542532525 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0612 - Precision: 0.9361 - Recall: 0.9509 - F1: 0.9434 - Accuracy: 0.9866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0774 | 1.0 | 1756 | 0.0640 | 0.9110 | 0.9376 | 0.9241 | 0.9833 | | 0.0347 | 2.0 | 3512 | 0.0669 | 0.9296 | 0.9448 | 0.9372 | 0.9849 | | 0.023 | 3.0 | 5268 | 0.0612 | 0.9361 | 0.9509 | 0.9434 | 0.9866 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
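The card above reports F1 0.9434 on CoNLL-2003, so standard token-classification inference applies; a hedged sketch, where the `aggregation_strategy` choice is illustrative rather than taken from the card:

```python
# Hedged sketch: named-entity recognition with the fine-tuned BERT model.
# Repo id and dataset (conll2003) come from the card; "simple" aggregation
# merges word-piece tokens back into whole entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="YujieRen/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```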
casque/00020_Swimming_Lesson_2_v1
casque
"2024-06-23T12:09:25Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-23T12:07:58Z"
--- license: creativeml-openrail-m ---
kraftpunk97/CrackerBox-YOLO
kraftpunk97
"2024-06-23T12:18:42Z"
0
0
null
[ "en", "region:us" ]
null
"2024-06-23T12:09:32Z"
--- language: - en ---
c4ss/Meta-Llama-3-8B-Instruct-strider-10000rows
c4ss
"2024-06-23T12:10:38Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
"2024-06-23T12:10:34Z"
--- license: llama3 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: Meta-Llama-3-8B-Instruct-strider-10000rows results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B-Instruct-strider-10000rows This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.2 - Datasets 2.14.7 - Tokenizers 0.19.1
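This record is another PEFT adapter; beyond the attach-and-generate pattern shown earlier, one option the card leaves open is folding the adapter into the base weights for adapter-free serving. A hedged sketch follows — the merge step is a standard PEFT call, not anything the card prescribes, and the dtype and save path are assumptions:

```python
# Hedged sketch: merge the LoRA adapter into Meta-Llama-3-8B-Instruct so
# the result can be served without PEFT. Repo ids come from the card;
# bf16 and the output directory are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(
    base, "c4ss/Meta-Llama-3-8B-Instruct-strider-10000rows"
)
merged = model.merge_and_unload()                # bakes LoRA deltas into the base weights
merged.save_pretrained("llama3-strider-merged")  # hypothetical output directory
```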
Casper0508/MSc_llama2_finetuned_model_secondData7
Casper0508
"2024-06-23T12:11:18Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
"2024-06-23T12:11:11Z"
--- license: llama2 base_model: meta-llama/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: MSc_llama2_finetuned_model_secondData7 results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MSc_llama2_finetuned_model_secondData7 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 - load_in_4bit: True - load_in_8bit: False ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 250 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9823 | 1.33 | 10 | 3.6153 | | 3.3928 | 2.67 | 20 | 2.9413 | | 2.6305 | 4.0 | 30 | 2.1743 | | 1.9546 | 5.33 | 40 | 1.7079 | | 1.5996 | 6.67 | 50 | 1.4500 | | 1.2984 | 8.0 | 60 | 1.1277 | | 0.9632 | 9.33 | 70 | 0.8761 | | 0.8296 | 10.67 | 80 | 0.8206 | | 0.7589 | 12.0 | 90 | 0.7735 | | 0.7063 | 13.33 | 100 | 0.7446 | | 0.671 | 14.67 | 110 | 0.7278 | | 0.6405 | 16.0 | 120 | 0.7091 | | 0.6096 | 17.33 | 130 | 0.7021 | | 0.5845 | 18.67 | 140 | 0.6986 | | 0.5697 | 20.0 | 150 | 0.6938 | | 0.5539 | 21.33 | 160 | 0.6936 | | 0.5414 | 22.67 | 170 | 0.6913 | | 0.5313 | 24.0 | 180 | 0.6920 | | 0.522 | 25.33 | 190 | 0.6919 | | 0.5168 | 26.67 | 200 | 0.6932 | | 0.5191 | 28.0 | 210 | 0.6942 | | 0.5079 | 29.33 | 220 | 0.6938 | | 0.5132 | 30.67 | 230 | 0.6939 | | 0.5085 | 32.0 | 240 | 0.6939 | | 0.5079 | 33.33 | 250 | 0.6939 | ### Framework versions - PEFT 0.4.0 - Transformers 4.38.2 - Pytorch 2.3.1+cu121 - Datasets 2.13.1 - Tokenizers 0.15.2
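The `bitsandbytes` settings listed above map one-to-one onto a `BitsAndBytesConfig`; a hedged sketch of reloading the adapter the same way (the base/adapter pairing is taken from this card, everything else is standard transformers/PEFT usage):

```python
# Sketch reconstructing the 4-bit config from this card: nf4 quantization,
# double quantization enabled, bfloat16 compute dtype.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # gated base model; access required
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Casper0508/MSc_llama2_finetuned_model_secondData7")
```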
harshsinghr63/Ahate
harshsinghr63
"2024-06-23T12:11:53Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T12:11:51Z"
--- license: apache-2.0 ---
AayanJaleel/Gojo
AayanJaleel
"2024-06-23T12:12:35Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T12:12:35Z"
--- license: apache-2.0 ---
KafkaSuper/ka
KafkaSuper
"2024-06-24T09:24:12Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T12:13:13Z"
--- license: openrail ---
jmaczan/rick-and-morty-gpt
jmaczan
"2024-06-23T12:20:04Z"
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
"2024-06-23T12:15:04Z"
--- license: gpl-3.0 --- Run the model with [this GPT implementation](https://github.com/jmaczan/gpt/): ```sh python src/run.py --from-checkpoint checkpoint_path.pth ``` Resume training: ```sh python src/train.py --from-checkpoint checkpoint_path.pth ```
Binboy/Carjan
Binboy
"2024-06-23T12:16:15Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T12:16:15Z"
--- license: openrail ---
SwimChoi/villama2-7b-chat-Poland-lora
SwimChoi
"2024-06-23T12:17:23Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T12:17:18Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
SwimChoi/villama2-7b-chat-Kosovo-lora
SwimChoi
"2024-06-23T12:18:41Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-23T12:18:38Z"
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
user87441257/Reinforce-Pixelcopter-PLE-v0
user87441257
"2024-06-23T12:24:22Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-23T12:20:53Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 26.20 +/- 18.67 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
itay-nakash/model_cb2b1e6d90_sweep_effortless-wildflower-865
itay-nakash
"2024-06-23T12:21:23Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:21:23Z"
Entry not found
oz1115/roberta-large-peft-p-tuning
oz1115
"2024-06-23T12:22:11Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-23T12:22:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
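The card itself gives no usage details; under the assumption (taken only from the repo name) that this is a PEFT p-tuning adapter on `roberta-large`, loading might look like the following sketch — the task head is a guess:

```python
# Assumption-heavy sketch: the repo name suggests a p-tuning PEFT adapter on
# roberta-large; the task type is not stated, so classification is a guess.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-large")
model = PeftModel.from_pretrained(base, "oz1115/roberta-large-peft-p-tuning")
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
```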
itay-nakash/model_6c19c2b8b0_sweep_glamorous-galaxy-866
itay-nakash
"2024-06-23T12:22:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:22:34Z"
Entry not found
itay-nakash/model_71dd0b85f5_sweep_silvery-bird-867
itay-nakash
"2024-06-23T12:22:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:22:42Z"
Entry not found
itay-nakash/model_0b8bff813c_sweep_wild-sea-870
itay-nakash
"2024-06-23T12:23:28Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:23:28Z"
Entry not found
itay-nakash/model_2ec771cb72_sweep_classic-monkey-869
itay-nakash
"2024-06-23T12:23:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:23:36Z"
Entry not found
itay-nakash/model_6d5c5a99e5_sweep_crisp-pyramid-868
itay-nakash
"2024-06-23T12:23:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:23:38Z"
Entry not found
odelz/hindi_fb1mms_balancedv2
odelz
"2024-06-23T12:25:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:25:14Z"
Entry not found
itay-nakash/model_9539ee4e06_sweep_glamorous-armadillo-871
itay-nakash
"2024-06-23T12:25:18Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:25:18Z"
Entry not found
itay-nakash/model_3f5c893599_sweep_dulcet-river-872
itay-nakash
"2024-06-23T12:25:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:25:25Z"
Entry not found
itay-nakash/model_fb5a361adf_sweep_charmed-fog-874
itay-nakash
"2024-06-23T12:28:08Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:28:08Z"
Entry not found
itay-nakash/model_47b4c49ddb_sweep_stoic-jazz-873
itay-nakash
"2024-06-23T12:28:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:28:10Z"
Entry not found
Mikeshu/photosession
Mikeshu
"2024-06-23T12:29:19Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:29:19Z"
Entry not found
OpenCitiesApp/transformers
OpenCitiesApp
"2024-06-23T12:31:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:31:09Z"
Entry not found
Padmanthan/JiuZhang3.0-Corpus-SFT
Padmanthan
"2024-06-23T12:31:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:31:10Z"
Entry not found
Binboy/Ghtyt
Binboy
"2024-06-26T01:45:09Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-23T12:32:24Z"
--- license: openrail ---
itay-nakash/model_e4ad58a464_sweep_visionary-violet-875
itay-nakash
"2024-06-23T12:32:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:32:52Z"
Entry not found
camilomj/MJDANGEROUSERA
camilomj
"2024-06-23T12:34:47Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-23T12:33:39Z"
--- license: apache-2.0 ---
itay-nakash/model_0b8bff813c_sweep_prime-grass-876
itay-nakash
"2024-06-23T12:38:04Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:38:04Z"
Entry not found
zoltanbege/example-model
zoltanbege
"2024-06-24T11:32:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:38:06Z"
--- license: mit --- # Example Model This is my model card README.
itay-nakash/model_2ec771cb72_sweep_crisp-moon-877
itay-nakash
"2024-06-23T12:38:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-23T12:38:12Z"
Entry not found
ikedachin/bert-base-uncased-issues-128
ikedachin
"2024-06-23T17:17:51Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-06-23T12:38:12Z"
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1056 | 1.0 | 291 | 1.6941 | | 1.6321 | 2.0 | 582 | 1.5138 | | 1.495 | 3.0 | 873 | 1.3614 | | 1.393 | 4.0 | 1164 | 1.3305 | | 1.3288 | 5.0 | 1455 | 1.2294 | | 1.2828 | 6.0 | 1746 | 1.3679 | | 1.2314 | 7.0 | 2037 | 1.2946 | | 1.2028 | 8.0 | 2328 | 1.3472 | | 1.1671 | 9.0 | 2619 | 1.2308 | | 1.1402 | 10.0 | 2910 | 1.1784 | | 1.1281 | 11.0 | 3201 | 1.1330 | | 1.108 | 12.0 | 3492 | 1.1885 | | 1.0876 | 13.0 | 3783 | 1.2176 | | 1.0757 | 14.0 | 4074 | 1.2072 | | 1.0729 | 15.0 | 4365 | 1.2215 | | 1.0639 | 16.0 | 4656 | 1.2425 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.1 - Datasets 2.19.1 - Tokenizers 0.19.1
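Because this is a masked-LM fine-tune of `bert-base-uncased` (the record carries the `fill-mask` tag), it can be queried through the standard fill-mask pipeline; a minimal sketch with an illustrative prompt:

```python
# Sketch: query the masked-LM through the fill-mask pipeline; the example
# sentence is illustrative, not taken from the training data.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ikedachin/bert-base-uncased-issues-128")
for pred in fill_mask("This issue is related to [MASK] installation."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```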
starnet/02-star21-06-23-01
starnet
"2024-06-23T13:22:34Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
null
"2024-06-23T12:38:30Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).