| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1-900k |
| metadata | stringlengths | 2-438k |
| id | stringlengths | 5-122 |
| last_modified | null | |
| tags | sequencelengths | 1-1.84k |
| sha | null | |
| created_at | stringlengths | 25-25 |
| arxiv | sequencelengths | 0-201 |
| languages | sequencelengths | 0-1.83k |
| tags_str | stringlengths | 17-9.34k |
| text_str | stringlengths | 0-389k |
| text_lists | sequencelengths | 0-722 |
| processed_texts | sequencelengths | 1-723 |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
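This card leaves the snippet unspecified, so the following is a minimal, unofficial sketch based only on the repo id and the `fill-mask` / `bert` tags attached to this entry; the example sentence and mask token are assumptions, not the authors' own example.

```python
# Hedged sketch: load this checkpoint as a fill-mask pipeline.
# The repo id comes from this entry's metadata; BERT tokenizers normally
# use [MASK], but check pipe.tokenizer.mask_token to be sure.
from transformers import pipeline

pipe = pipeline("fill-mask", model="CLASS-MATE/BERT-MLM-multilingual-cased")
print(pipe("Paris is the capital of [MASK]."))
```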
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | CLASS-MATE/BERT-MLM-multilingual-cased | null | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:03:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2880
- F1 Score: 0.8934
- Accuracy: 0.8932
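The card does not include a loading example; the sketch below shows how a PEFT adapter of this kind is typically attached to its base model. Repo ids are taken from this card, but the head class, label count, and `trust_remote_code` flag are assumptions.

```python
# Hedged sketch: load the seqsight base model, then attach this LoRA adapter.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary label count is an assumption
)
model = PeftModel.from_pretrained(base, adapter_id).eval()
```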
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3756 | 2.17 | 200 | 0.2960 | 0.8851 | 0.8850 |
| 0.2905 | 4.35 | 400 | 0.3002 | 0.8839 | 0.8836 |
| 0.2784 | 6.52 | 600 | 0.2925 | 0.8883 | 0.8884 |
| 0.2756 | 8.7 | 800 | 0.3025 | 0.8826 | 0.8823 |
| 0.2605 | 10.87 | 1000 | 0.2880 | 0.8905 | 0.8905 |
| 0.2556 | 13.04 | 1200 | 0.2872 | 0.8961 | 0.8960 |
| 0.2488 | 15.22 | 1400 | 0.2880 | 0.8949 | 0.8946 |
| 0.2435 | 17.39 | 1600 | 0.2970 | 0.8908 | 0.8905 |
| 0.2394 | 19.57 | 1800 | 0.2875 | 0.8992 | 0.8994 |
| 0.2346 | 21.74 | 2000 | 0.2896 | 0.8962 | 0.8960 |
| 0.2279 | 23.91 | 2200 | 0.3006 | 0.8921 | 0.8919 |
| 0.2257 | 26.09 | 2400 | 0.2989 | 0.8874 | 0.8871 |
| 0.2218 | 28.26 | 2600 | 0.2959 | 0.8921 | 0.8919 |
| 0.2164 | 30.43 | 2800 | 0.2968 | 0.8907 | 0.8905 |
| 0.215 | 32.61 | 3000 | 0.2993 | 0.9015 | 0.9014 |
| 0.2115 | 34.78 | 3200 | 0.3025 | 0.8913 | 0.8912 |
| 0.2074 | 36.96 | 3400 | 0.3011 | 0.8981 | 0.8980 |
| 0.2052 | 39.13 | 3600 | 0.3075 | 0.8901 | 0.8898 |
| 0.2008 | 41.3 | 3800 | 0.3079 | 0.8934 | 0.8932 |
| 0.1992 | 43.48 | 4000 | 0.3171 | 0.8913 | 0.8912 |
| 0.1985 | 45.65 | 4200 | 0.3199 | 0.8858 | 0.8857 |
| 0.1912 | 47.83 | 4400 | 0.3155 | 0.8891 | 0.8891 |
| 0.1905 | 50.0 | 4600 | 0.3100 | 0.8940 | 0.8939 |
| 0.19 | 52.17 | 4800 | 0.3164 | 0.8884 | 0.8884 |
| 0.1915 | 54.35 | 5000 | 0.3157 | 0.8935 | 0.8932 |
| 0.185 | 56.52 | 5200 | 0.3235 | 0.8887 | 0.8884 |
| 0.1806 | 58.7 | 5400 | 0.3242 | 0.8900 | 0.8898 |
| 0.1809 | 60.87 | 5600 | 0.3224 | 0.8880 | 0.8877 |
| 0.1787 | 63.04 | 5800 | 0.3286 | 0.8866 | 0.8864 |
| 0.1788 | 65.22 | 6000 | 0.3372 | 0.8859 | 0.8857 |
| 0.1762 | 67.39 | 6200 | 0.3454 | 0.8779 | 0.8775 |
| 0.1732 | 69.57 | 6400 | 0.3405 | 0.8826 | 0.8823 |
| 0.171 | 71.74 | 6600 | 0.3395 | 0.8914 | 0.8912 |
| 0.1726 | 73.91 | 6800 | 0.3427 | 0.8873 | 0.8871 |
| 0.169 | 76.09 | 7000 | 0.3593 | 0.8820 | 0.8816 |
| 0.1688 | 78.26 | 7200 | 0.3436 | 0.8846 | 0.8843 |
| 0.1674 | 80.43 | 7400 | 0.3411 | 0.8900 | 0.8898 |
| 0.1641 | 82.61 | 7600 | 0.3489 | 0.8892 | 0.8891 |
| 0.1646 | 84.78 | 7800 | 0.3523 | 0.8887 | 0.8884 |
| 0.1646 | 86.96 | 8000 | 0.3448 | 0.8899 | 0.8898 |
| 0.1629 | 89.13 | 8200 | 0.3592 | 0.8846 | 0.8843 |
| 0.1632 | 91.3 | 8400 | 0.3495 | 0.8865 | 0.8864 |
| 0.1578 | 93.48 | 8600 | 0.3543 | 0.8893 | 0.8891 |
| 0.1598 | 95.65 | 8800 | 0.3551 | 0.8878 | 0.8877 |
| 0.1612 | 97.83 | 9000 | 0.3517 | 0.8886 | 0.8884 |
| 0.163 | 100.0 | 9200 | 0.3541 | 0.8846 | 0.8843 |
| 0.163 | 102.17 | 9400 | 0.3523 | 0.8886 | 0.8884 |
| 0.1584 | 104.35 | 9600 | 0.3553 | 0.8880 | 0.8877 |
| 0.1561 | 106.52 | 9800 | 0.3557 | 0.8865 | 0.8864 |
| 0.1564 | 108.7 | 10000 | 0.3554 | 0.8886 | 0.8884 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:03:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4-seqsight\_65536\_512\_47M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2880
* F1 Score: 0.8934
* Accuracy: 0.8932
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2
This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
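As a rough illustration only (the run itself used the alignment-handbook / TRL training scripts, whose exact arguments may differ), the listed settings map onto Hugging Face `TrainingArguments` roughly as follows; 8 per-device samples x 8 GPUs x 4 accumulation steps gives the stated total batch size of 256.

```python
# Illustrative sketch of the hyperparameters above; not the actual training code.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 devices x 4 accumulation steps = 256 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```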
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2", "results": []}]} | ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:03:29+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2
This model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
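No loading snippet is included; a speculative sketch using Unsloth's `FastLanguageModel`, mirroring the 4-bit gemma-2b-it base named above, might look like this (sequence length and prompt are assumptions):

```python
# Hedged sketch: load this LoRA checkpoint with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="animaRegem/gemma-2b-it-lora-0_1-malayalam",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```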
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"} | animaRegem/gemma-2b-it-lora-0_1-malayalam | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:04:03+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: animaRegem
- License: apache-2.0
- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: animaRegem\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: animaRegem\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2723
- F1 Score: 0.8953
- Accuracy: 0.8953
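As with the L8 variant above, no usage snippet is given. One common follow-up for an adapter like this is to merge it into the base model so it can be served without PEFT at inference time; the sketch below is illustrative only (model class and label count are assumptions).

```python
# Illustrative sketch: merge this LoRA adapter into its base and save a standalone copy.
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2, trust_remote_code=True
)
merged = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L32_f"
).merge_and_unload()
merged.save_pretrained("GUE_EMP_H4-seqsight-L32-merged")  # output path is a placeholder
```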
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3593 | 2.17 | 200 | 0.2935 | 0.8873 | 0.8871 |
| 0.2808 | 4.35 | 400 | 0.2879 | 0.8960 | 0.8960 |
| 0.2632 | 6.52 | 600 | 0.2894 | 0.8888 | 0.8891 |
| 0.252 | 8.7 | 800 | 0.2906 | 0.8888 | 0.8884 |
| 0.2352 | 10.87 | 1000 | 0.2793 | 0.9110 | 0.9110 |
| 0.2293 | 13.04 | 1200 | 0.2952 | 0.8901 | 0.8898 |
| 0.2172 | 15.22 | 1400 | 0.2890 | 0.8948 | 0.8946 |
| 0.2113 | 17.39 | 1600 | 0.3144 | 0.8909 | 0.8905 |
| 0.2004 | 19.57 | 1800 | 0.3055 | 0.8945 | 0.8946 |
| 0.1942 | 21.74 | 2000 | 0.3162 | 0.8907 | 0.8905 |
| 0.1835 | 23.91 | 2200 | 0.3497 | 0.8696 | 0.8693 |
| 0.1786 | 26.09 | 2400 | 0.3230 | 0.8819 | 0.8816 |
| 0.1698 | 28.26 | 2600 | 0.3381 | 0.8858 | 0.8857 |
| 0.1611 | 30.43 | 2800 | 0.3506 | 0.8852 | 0.8850 |
| 0.1532 | 32.61 | 3000 | 0.3809 | 0.8799 | 0.8802 |
| 0.1489 | 34.78 | 3200 | 0.3671 | 0.8791 | 0.8789 |
| 0.1385 | 36.96 | 3400 | 0.3798 | 0.8786 | 0.8782 |
| 0.1347 | 39.13 | 3600 | 0.3871 | 0.8758 | 0.8754 |
| 0.1278 | 41.3 | 3800 | 0.4102 | 0.8761 | 0.8761 |
| 0.1241 | 43.48 | 4000 | 0.4262 | 0.8790 | 0.8789 |
| 0.1173 | 45.65 | 4200 | 0.4611 | 0.8715 | 0.8720 |
| 0.1122 | 47.83 | 4400 | 0.4375 | 0.8797 | 0.8795 |
| 0.11 | 50.0 | 4600 | 0.4266 | 0.8786 | 0.8789 |
| 0.1039 | 52.17 | 4800 | 0.4801 | 0.8736 | 0.8734 |
| 0.1057 | 54.35 | 5000 | 0.4509 | 0.8775 | 0.8775 |
| 0.0953 | 56.52 | 5200 | 0.4760 | 0.8717 | 0.8713 |
| 0.0926 | 58.7 | 5400 | 0.5029 | 0.8683 | 0.8679 |
| 0.0903 | 60.87 | 5600 | 0.4814 | 0.8722 | 0.8720 |
| 0.0863 | 63.04 | 5800 | 0.5023 | 0.8729 | 0.8727 |
| 0.0856 | 65.22 | 6000 | 0.5227 | 0.8670 | 0.8665 |
| 0.0833 | 67.39 | 6200 | 0.5262 | 0.8677 | 0.8672 |
| 0.0783 | 69.57 | 6400 | 0.5150 | 0.8695 | 0.8693 |
| 0.0761 | 71.74 | 6600 | 0.5296 | 0.8734 | 0.8734 |
| 0.0727 | 73.91 | 6800 | 0.5547 | 0.8704 | 0.8700 |
| 0.0705 | 76.09 | 7000 | 0.5961 | 0.8663 | 0.8658 |
| 0.0718 | 78.26 | 7200 | 0.5728 | 0.8608 | 0.8604 |
| 0.0666 | 80.43 | 7400 | 0.5711 | 0.8695 | 0.8693 |
| 0.0657 | 82.61 | 7600 | 0.5681 | 0.8652 | 0.8652 |
| 0.0638 | 84.78 | 7800 | 0.5880 | 0.8697 | 0.8693 |
| 0.0616 | 86.96 | 8000 | 0.5926 | 0.8695 | 0.8693 |
| 0.0638 | 89.13 | 8200 | 0.5964 | 0.8641 | 0.8638 |
| 0.0638 | 91.3 | 8400 | 0.5819 | 0.8708 | 0.8706 |
| 0.0594 | 93.48 | 8600 | 0.5993 | 0.8680 | 0.8679 |
| 0.0574 | 95.65 | 8800 | 0.5968 | 0.8675 | 0.8672 |
| 0.0586 | 97.83 | 9000 | 0.5952 | 0.8640 | 0.8638 |
| 0.0584 | 100.0 | 9200 | 0.6028 | 0.8614 | 0.8611 |
| 0.0583 | 102.17 | 9400 | 0.6088 | 0.8640 | 0.8638 |
| 0.0575 | 104.35 | 9600 | 0.6062 | 0.8682 | 0.8679 |
| 0.0576 | 106.52 | 9800 | 0.6077 | 0.8668 | 0.8665 |
| 0.0553 | 108.7 | 10000 | 0.6073 | 0.8667 | 0.8665 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:04:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4-seqsight\_65536\_512\_47M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2723
* F1 Score: 0.8953
* Accuracy: 0.8953
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Llama-3-8B-Irene-v0.1
<img src="https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.1/resolve/main/Irene.png">
[SillyTavern Presets](https://huggingface.co/Virt-io/SillyTavern-Presets)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B)
* [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [Endevor/InfinityRP-v2-8B](https://huggingface.co/Endevor/InfinityRP-v2-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
  - model: ResplendentAI/SOVL_Llama3_8B
    parameters:
      density: 0.58
      weight: [0.15, 0.1, 0.1, 0.33]
  - model: Endevor/InfinityRP-v2-8B
    parameters:
      density: 0.58
      weight: [0.15, 0.1, 0.1, 0.25]
  - model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
    parameters:
      density: 0.66
      weight: [0.20, 0.35, 0.25, 0.25]
  - model: cgato/L3-TheSpice-8b-v0.8.3
    parameters:
      density: 0.66
      weight: [0.20, 0.15, 0.15, 0.25]
merge_method: dare_ties
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
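For context, a configuration like this is normally handed to the mergekit CLI, e.g. `mergekit-yaml irene-v0.1.yaml ./Llama-3-8B-Irene-v0.1 --cuda`; the file names here are placeholders and the flag set is an assumption rather than something recorded in this card.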
| {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "facebook", "meta", "pytorch", "llama", "llama-3"], "base_model": ["ResplendentAI/SOVL_Llama3_8B", "ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B", "NeverSleep/Llama-3-Lumimaid-8B-v0.1", "cgato/L3-TheSpice-8b-v0.8.3", "Endevor/InfinityRP-v2-8B"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | Virt-io/Llama-3-8B-Irene-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:ResplendentAI/SOVL_Llama3_8B",
"base_model:ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:Endevor/InfinityRP-v2-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:04:05+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #facebook #meta #pytorch #llama-3 #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-ResplendentAI/SOVL_Llama3_8B #base_model-ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B #base_model-NeverSleep/Llama-3-Lumimaid-8B-v0.1 #base_model-cgato/L3-TheSpice-8b-v0.8.3 #base_model-Endevor/InfinityRP-v2-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Llama-3-8B-Irene-v0.1
<img src="URL">
SillyTavern Presets
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using NeverSleep/Llama-3-Lumimaid-8B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* ResplendentAI/SOVL_Llama3_8B
* ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
* cgato/L3-TheSpice-8b-v0.8.3
* Endevor/InfinityRP-v2-8B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Llama-3-8B-Irene-v0.1\n\n<img src=\"URL\n\nSillyTavern Presest\n\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using NeverSleep/Llama-3-Lumimaid-8B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ResplendentAI/SOVL_Llama3_8B\n* ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B\n* cgato/L3-TheSpice-8b-v0.8.3\n* Endevor/InfinityRP-v2-8B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #facebook #meta #pytorch #llama-3 #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-ResplendentAI/SOVL_Llama3_8B #base_model-ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B #base_model-NeverSleep/Llama-3-Lumimaid-8B-v0.1 #base_model-cgato/L3-TheSpice-8b-v0.8.3 #base_model-Endevor/InfinityRP-v2-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama-3-8B-Irene-v0.1\n\n<img src=\"URL\n\nSillyTavern Presest\n\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using NeverSleep/Llama-3-Lumimaid-8B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ResplendentAI/SOVL_Llama3_8B\n* ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B\n* cgato/L3-TheSpice-8b-v0.8.3\n* Endevor/InfinityRP-v2-8B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
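The snippet is left as "[More Information Needed]"; since the repo id in this entry's metadata appears to point to a tokenizer artifact, a minimal, unofficial sketch would be:

```python
# Hedged sketch: load the tokenizer stored in this repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "animaRegem/gemma-2b-it-lora-0_1-malayalam-tokenizer"
)
print(tokenizer.tokenize("Hello world"))
```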
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | animaRegem/gemma-2b-it-lora-0_1-malayalam-tokenizer | null | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:04:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # Gille/StrangeMerges_16-7B-slerp AWQ
- Model creator: [Gille](https://huggingface.co/Gille)
- Original model: [StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/StrangeMerges_16-7B-slerp-AWQ"
system_message = "You are StrangeMerges_16-7B-slerp, incarnated as a powerful AI. You were created by Gille."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/StrangeMerges_16-7B-slerp-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:05:18+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Gille/StrangeMerges_16-7B-slerp AWQ
- Model creator: Gille
- Original model: StrangeMerges_16-7B-slerp
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# Gille/StrangeMerges_16-7B-slerp AWQ\n\n- Model creator: Gille\n- Original model: StrangeMerges_16-7B-slerp",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Gille/StrangeMerges_16-7B-slerp AWQ\n\n- Model creator: Gille\n- Original model: StrangeMerges_16-7B-slerp",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_copa
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6918
- Accuracy: 0.46
- F1: 0.4570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7088 | 1.0 | 50 | 0.6921 | 0.48 | 0.48 |
| 0.7024 | 2.0 | 100 | 0.6922 | 0.49 | 0.4894 |
| 0.6993 | 3.0 | 150 | 0.6921 | 0.46 | 0.4587 |
| 0.7005 | 4.0 | 200 | 0.6920 | 0.48 | 0.4788 |
| 0.6989 | 5.0 | 250 | 0.6919 | 0.47 | 0.4679 |
| 0.7018 | 6.0 | 300 | 0.6919 | 0.46 | 0.4570 |
| 0.6943 | 7.0 | 350 | 0.6919 | 0.46 | 0.4570 |
| 0.6943 | 8.0 | 400 | 0.6918 | 0.46 | 0.4570 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "lora_fine_tuned_copa", "results": []}]} | lenatr99/lora_fine_tuned_copa | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:08:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us
| lora\_fine\_tuned\_copa
=======================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6918
* Accuracy: 0.46
* F1: 0.4570
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3247
- Rouge1: 0.1978
- Rouge2: 0.099
- Rougel: 0.1684
- Rougelsum: 0.1682
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
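
As a rough guide, the settings above correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below. The output directory, the mapping of "Native AMP" to `fp16=True`, `predict_with_generate`, and the data collator are assumptions on top of what the card records.

```python
# Hedged sketch of Seq2Seq fine-tuning arguments matching the card's hyperparameters.
# Dataset loading and tokenization are assumed, not recorded in the card.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_billsum_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    fp16=True,                   # "Native AMP" in the card; assumed to mean fp16 mixed precision
    predict_with_generate=True,  # assumed, needed to compute ROUGE on generated summaries
)
collator = DataCollatorForSeq2Seq(tokenizer, model=model)
# trainer = Seq2SeqTrainer(model=model, args=args, data_collator=collator,
#                          train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
```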
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.3922 | 0.1587 | 0.0648 | 0.1323 | 0.1323 | 19.0 |
| No log | 2.0 | 124 | 2.3515 | 0.1894 | 0.0903 | 0.1611 | 0.1607 | 19.0 |
| No log | 3.0 | 186 | 2.3310 | 0.1968 | 0.0983 | 0.1679 | 0.1675 | 19.0 |
| No log | 4.0 | 248 | 2.3247 | 0.1978 | 0.099 | 0.1684 | 0.1682 | 19.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_billsum_model", "results": []}]} | ngthanhlong089/my_awesome_billsum_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:08:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_billsum\_model
===========================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3247
* Rouge1: 0.1978
* Rouge2: 0.099
* Rougel: 0.1684
* Rougelsum: 0.1682
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
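
Since the card leaves this section unfilled, the following is only a generic loading sketch inferred from the repository tags (transformers, llama, text-generation, conversational); it is not code supplied by the model author, and it assumes the checkpoint ships a chat template, which the "conversational" tag suggests but the card does not confirm.

```python
# Generic sketch inferred from the tags; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "golf2248/art9e97"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Hello, what can you do?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```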
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/art9e97 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:08:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
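
This card's getting-started section is likewise unfilled. Based only on the repository tags (#transformers #llama #text-generation), an assumed minimal usage sketch, not authored by the model creator, is:

```python
# Assumed usage based only on the repo tags; prompt and settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="cilantro9246/mkon0fy", device_map="auto")
print(generator("Write one sentence about quantization.", max_new_tokens=48)[0]["generated_text"])
```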
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/mkon0fy | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:11:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_mrqa_v2
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1229 | 1.0 | 967 | 1.3499 |
| 1.1979 | 2.0 | 1934 | 1.3192 |
| 0.9852 | 3.0 | 2901 | 1.3580 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
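
The card does not include a usage section. A hedged inference sketch for an extractive question-answering model of this kind follows; the example question and context are assumptions for illustration only.

```python
# Hedged sketch: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="enriquesaou/roberta_mrqa_v2")
result = qa(
    question="What was the final validation loss?",
    context="The model was trained for three epochs and reached a validation loss of 1.3580.",
)
print(result["answer"], result["score"])
```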
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "roberta_mrqa_v2", "results": []}]} | enriquesaou/roberta_mrqa_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:11:49+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us
| roberta\_mrqa\_v2
=================
This model is a fine-tuned version of FacebookAI/roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3580
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
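
Because neither the card nor the tags state the architecture or task, only a generic checkpoint-loading sketch is possible here; whether `AutoModel` resolves the intended head is an assumption, and the snippet is not provided by the model author.

```python
# Generic loading sketch only; the card does not state the architecture or task,
# so the correct Auto* class is an assumption (AutoModel loads the bare backbone).
from transformers import AutoModel, AutoTokenizer

repo_id = "jeongmi/solar_text"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
print(model.config.architectures)
```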
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jeongmi/solar_text | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:12:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3391
- F1 Score: 0.8676
- Accuracy: 0.8677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
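
Because this is a PEFT adapter trained on top of the seqsight backbone, inference requires loading the base model first and then attaching the adapter. The sketch below is hedged: the use of a sequence-classification head with two labels for the GUE_EMP_H3 task and the `trust_remote_code` flags are assumptions not stated in the card.

```python
# Hedged sketch of loading the PEFT adapter on top of the seqsight backbone.
# The task head and trust_remote_code settings are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)  # assumption
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # assumption
)
model = PeftModel.from_pretrained(base, adapter_id)
```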
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4898 | 2.13 | 200 | 0.4183 | 0.8196 | 0.8196 |
| 0.3827 | 4.26 | 400 | 0.4199 | 0.8255 | 0.8263 |
| 0.3609 | 6.38 | 600 | 0.4122 | 0.8241 | 0.8250 |
| 0.3487 | 8.51 | 800 | 0.3864 | 0.8301 | 0.8303 |
| 0.3376 | 10.64 | 1000 | 0.3927 | 0.8282 | 0.8290 |
| 0.3246 | 12.77 | 1200 | 0.3862 | 0.8331 | 0.8337 |
| 0.3197 | 14.89 | 1400 | 0.3668 | 0.8402 | 0.8403 |
| 0.3126 | 17.02 | 1600 | 0.3676 | 0.8407 | 0.8410 |
| 0.3023 | 19.15 | 1800 | 0.4137 | 0.8265 | 0.8277 |
| 0.299 | 21.28 | 2000 | 0.3796 | 0.8378 | 0.8383 |
| 0.2968 | 23.4 | 2200 | 0.3519 | 0.8523 | 0.8524 |
| 0.2882 | 25.53 | 2400 | 0.3784 | 0.8386 | 0.8390 |
| 0.2879 | 27.66 | 2600 | 0.3634 | 0.8487 | 0.8490 |
| 0.2888 | 29.79 | 2800 | 0.3759 | 0.8458 | 0.8464 |
| 0.2845 | 31.91 | 3000 | 0.3722 | 0.8452 | 0.8457 |
| 0.2801 | 34.04 | 3200 | 0.3733 | 0.8446 | 0.8450 |
| 0.2778 | 36.17 | 3400 | 0.3864 | 0.8438 | 0.8444 |
| 0.2752 | 38.3 | 3600 | 0.3656 | 0.8568 | 0.8570 |
| 0.2773 | 40.43 | 3800 | 0.3683 | 0.8527 | 0.8530 |
| 0.2727 | 42.55 | 4000 | 0.3568 | 0.8576 | 0.8577 |
| 0.2711 | 44.68 | 4200 | 0.3499 | 0.8596 | 0.8597 |
| 0.2708 | 46.81 | 4400 | 0.3751 | 0.8479 | 0.8484 |
| 0.2718 | 48.94 | 4600 | 0.3834 | 0.8463 | 0.8470 |
| 0.2679 | 51.06 | 4800 | 0.3602 | 0.8568 | 0.8570 |
| 0.2659 | 53.19 | 5000 | 0.3933 | 0.8455 | 0.8464 |
| 0.2692 | 55.32 | 5200 | 0.3556 | 0.8555 | 0.8557 |
| 0.2665 | 57.45 | 5400 | 0.3706 | 0.8508 | 0.8510 |
| 0.2626 | 59.57 | 5600 | 0.3638 | 0.8568 | 0.8570 |
| 0.2672 | 61.7 | 5800 | 0.3498 | 0.8589 | 0.8591 |
| 0.2595 | 63.83 | 6000 | 0.3725 | 0.8546 | 0.8550 |
| 0.2637 | 65.96 | 6200 | 0.3743 | 0.8580 | 0.8584 |
| 0.2606 | 68.09 | 6400 | 0.3801 | 0.8559 | 0.8564 |
| 0.2586 | 70.21 | 6600 | 0.3770 | 0.8546 | 0.8550 |
| 0.2613 | 72.34 | 6800 | 0.3648 | 0.8547 | 0.8550 |
| 0.2608 | 74.47 | 7000 | 0.3994 | 0.8434 | 0.8444 |
| 0.2586 | 76.6 | 7200 | 0.3739 | 0.8547 | 0.8550 |
| 0.2612 | 78.72 | 7400 | 0.3657 | 0.8567 | 0.8570 |
| 0.2575 | 80.85 | 7600 | 0.3624 | 0.8554 | 0.8557 |
| 0.2573 | 82.98 | 7800 | 0.3695 | 0.8573 | 0.8577 |
| 0.2579 | 85.11 | 8000 | 0.3598 | 0.8574 | 0.8577 |
| 0.2579 | 87.23 | 8200 | 0.3569 | 0.8595 | 0.8597 |
| 0.253 | 89.36 | 8400 | 0.3685 | 0.8534 | 0.8537 |
| 0.2559 | 91.49 | 8600 | 0.3750 | 0.8547 | 0.8550 |
| 0.2566 | 93.62 | 8800 | 0.3699 | 0.8553 | 0.8557 |
| 0.2546 | 95.74 | 9000 | 0.3729 | 0.8560 | 0.8564 |
| 0.2524 | 97.87 | 9200 | 0.3649 | 0.8567 | 0.8570 |
| 0.2533 | 100.0 | 9400 | 0.3632 | 0.8568 | 0.8570 |
| 0.2557 | 102.13 | 9600 | 0.3663 | 0.8573 | 0.8577 |
| 0.2527 | 104.26 | 9800 | 0.3654 | 0.8567 | 0.8570 |
| 0.2531 | 106.38 | 10000 | 0.3667 | 0.8573 | 0.8577 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:12:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3-seqsight\_65536\_512\_47M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3391
* F1 Score: 0.8676
* Accuracy: 0.8677
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3253
- F1 Score: 0.8696
- Accuracy: 0.8697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.455 | 2.13 | 200 | 0.3926 | 0.8357 | 0.8357 |
| 0.3495 | 4.26 | 400 | 0.4119 | 0.8128 | 0.8143 |
| 0.3143 | 6.38 | 600 | 0.4219 | 0.8202 | 0.8216 |
| 0.2963 | 8.51 | 800 | 0.3801 | 0.8343 | 0.8350 |
| 0.2842 | 10.64 | 1000 | 0.3831 | 0.8389 | 0.8397 |
| 0.2751 | 12.77 | 1200 | 0.3779 | 0.8493 | 0.8497 |
| 0.2711 | 14.89 | 1400 | 0.3792 | 0.8452 | 0.8457 |
| 0.2664 | 17.02 | 1600 | 0.3620 | 0.8549 | 0.8550 |
| 0.2571 | 19.15 | 1800 | 0.4008 | 0.8409 | 0.8417 |
| 0.2527 | 21.28 | 2000 | 0.3837 | 0.8473 | 0.8477 |
| 0.2544 | 23.4 | 2200 | 0.3505 | 0.8595 | 0.8597 |
| 0.2405 | 25.53 | 2400 | 0.4114 | 0.8446 | 0.8450 |
| 0.247 | 27.66 | 2600 | 0.3592 | 0.8628 | 0.8631 |
| 0.2449 | 29.79 | 2800 | 0.3576 | 0.8554 | 0.8557 |
| 0.2392 | 31.91 | 3000 | 0.3568 | 0.8624 | 0.8624 |
| 0.2344 | 34.04 | 3200 | 0.3470 | 0.8670 | 0.8671 |
| 0.2327 | 36.17 | 3400 | 0.3992 | 0.8511 | 0.8517 |
| 0.2284 | 38.3 | 3600 | 0.3705 | 0.8622 | 0.8624 |
| 0.2305 | 40.43 | 3800 | 0.3835 | 0.8572 | 0.8577 |
| 0.2239 | 42.55 | 4000 | 0.3739 | 0.8556 | 0.8557 |
| 0.2211 | 44.68 | 4200 | 0.3594 | 0.8651 | 0.8651 |
| 0.2194 | 46.81 | 4400 | 0.3890 | 0.8541 | 0.8544 |
| 0.2202 | 48.94 | 4600 | 0.3940 | 0.8511 | 0.8517 |
| 0.215 | 51.06 | 4800 | 0.3617 | 0.8610 | 0.8611 |
| 0.2124 | 53.19 | 5000 | 0.4273 | 0.8477 | 0.8484 |
| 0.2171 | 55.32 | 5200 | 0.3822 | 0.8595 | 0.8597 |
| 0.2099 | 57.45 | 5400 | 0.3963 | 0.8508 | 0.8510 |
| 0.2073 | 59.57 | 5600 | 0.3901 | 0.8549 | 0.8550 |
| 0.2075 | 61.7 | 5800 | 0.3635 | 0.8651 | 0.8651 |
| 0.2038 | 63.83 | 6000 | 0.4093 | 0.8520 | 0.8524 |
| 0.2061 | 65.96 | 6200 | 0.4333 | 0.8450 | 0.8457 |
| 0.2036 | 68.09 | 6400 | 0.4143 | 0.8540 | 0.8544 |
| 0.2006 | 70.21 | 6600 | 0.4012 | 0.8589 | 0.8591 |
| 0.2009 | 72.34 | 6800 | 0.3996 | 0.8588 | 0.8591 |
| 0.1995 | 74.47 | 7000 | 0.4453 | 0.8435 | 0.8444 |
| 0.1969 | 76.6 | 7200 | 0.3989 | 0.8541 | 0.8544 |
| 0.2014 | 78.72 | 7400 | 0.3923 | 0.8543 | 0.8544 |
| 0.199 | 80.85 | 7600 | 0.4155 | 0.8495 | 0.8497 |
| 0.1962 | 82.98 | 7800 | 0.4144 | 0.8549 | 0.8550 |
| 0.1938 | 85.11 | 8000 | 0.3919 | 0.8589 | 0.8591 |
| 0.1919 | 87.23 | 8200 | 0.4019 | 0.8523 | 0.8524 |
| 0.1905 | 89.36 | 8400 | 0.4164 | 0.8530 | 0.8530 |
| 0.1898 | 91.49 | 8600 | 0.4227 | 0.8541 | 0.8544 |
| 0.1908 | 93.62 | 8800 | 0.4182 | 0.8548 | 0.8550 |
| 0.1906 | 95.74 | 9000 | 0.4258 | 0.8514 | 0.8517 |
| 0.188 | 97.87 | 9200 | 0.4151 | 0.8555 | 0.8557 |
| 0.1886 | 100.0 | 9400 | 0.4124 | 0.8543 | 0.8544 |
| 0.1891 | 102.13 | 9600 | 0.4163 | 0.8522 | 0.8524 |
| 0.1867 | 104.26 | 9800 | 0.4136 | 0.8536 | 0.8537 |
| 0.1866 | 106.38 | 10000 | 0.4139 | 0.8542 | 0.8544 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:12:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3-seqsight\_65536\_512\_47M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3253
* F1 Score: 0.8696
* Accuracy: 0.8697
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
- F1 Score: 0.8683
- Accuracy: 0.8684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
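These are the standard Hugging Face Trainer settings. As a minimal sketch (not the exact training script used for this run), they could be expressed as `TrainingArguments`; the output directory is an illustrative placeholder:

```python
# Minimal sketch: the hyperparameters above expressed as TrainingArguments.
# The output_dir is an illustrative placeholder, not the original path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3-seqsight_65536_512_47M-L32_f",  # placeholder
    learning_rate=5e-4,                # learning_rate: 0.0005
    per_device_train_batch_size=128,   # train_batch_size: 128
    per_device_eval_batch_size=128,    # eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                  # training_steps: 10000
)
```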
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4364 | 2.13 | 200 | 0.3952 | 0.8232 | 0.8236 |
| 0.3132 | 4.26 | 400 | 0.3627 | 0.8461 | 0.8464 |
| 0.2868 | 6.38 | 600 | 0.4096 | 0.8341 | 0.8350 |
| 0.275 | 8.51 | 800 | 0.3467 | 0.8561 | 0.8564 |
| 0.2648 | 10.64 | 1000 | 0.3499 | 0.8561 | 0.8564 |
| 0.2538 | 12.77 | 1200 | 0.3487 | 0.8589 | 0.8591 |
| 0.2482 | 14.89 | 1400 | 0.4163 | 0.8368 | 0.8377 |
| 0.2415 | 17.02 | 1600 | 0.3507 | 0.8671 | 0.8671 |
| 0.2295 | 19.15 | 1800 | 0.3938 | 0.8472 | 0.8477 |
| 0.2222 | 21.28 | 2000 | 0.3751 | 0.8637 | 0.8637 |
| 0.2213 | 23.4 | 2200 | 0.3761 | 0.8635 | 0.8637 |
| 0.2024 | 25.53 | 2400 | 0.4104 | 0.8596 | 0.8597 |
| 0.2089 | 27.66 | 2600 | 0.4076 | 0.8607 | 0.8611 |
| 0.198 | 29.79 | 2800 | 0.4025 | 0.8620 | 0.8624 |
| 0.1919 | 31.91 | 3000 | 0.4010 | 0.8570 | 0.8570 |
| 0.1828 | 34.04 | 3200 | 0.3662 | 0.8677 | 0.8677 |
| 0.1758 | 36.17 | 3400 | 0.4293 | 0.8629 | 0.8631 |
| 0.1685 | 38.3 | 3600 | 0.4412 | 0.8574 | 0.8577 |
| 0.1667 | 40.43 | 3800 | 0.4562 | 0.8585 | 0.8591 |
| 0.1578 | 42.55 | 4000 | 0.4707 | 0.8549 | 0.8550 |
| 0.1513 | 44.68 | 4200 | 0.4599 | 0.8575 | 0.8577 |
| 0.1463 | 46.81 | 4400 | 0.4876 | 0.8582 | 0.8584 |
| 0.1431 | 48.94 | 4600 | 0.5163 | 0.8484 | 0.8490 |
| 0.1349 | 51.06 | 4800 | 0.4653 | 0.8635 | 0.8637 |
| 0.1296 | 53.19 | 5000 | 0.5254 | 0.8593 | 0.8597 |
| 0.1278 | 55.32 | 5200 | 0.5322 | 0.8526 | 0.8530 |
| 0.1199 | 57.45 | 5400 | 0.5515 | 0.8533 | 0.8537 |
| 0.117 | 59.57 | 5600 | 0.5362 | 0.8581 | 0.8584 |
| 0.1133 | 61.7 | 5800 | 0.4982 | 0.8569 | 0.8570 |
| 0.1118 | 63.83 | 6000 | 0.5740 | 0.8453 | 0.8457 |
| 0.11 | 65.96 | 6200 | 0.6111 | 0.8408 | 0.8417 |
| 0.1016 | 68.09 | 6400 | 0.6034 | 0.8464 | 0.8470 |
| 0.1013 | 70.21 | 6600 | 0.5935 | 0.8506 | 0.8510 |
| 0.0944 | 72.34 | 6800 | 0.5933 | 0.8560 | 0.8564 |
| 0.0944 | 74.47 | 7000 | 0.6320 | 0.8443 | 0.8450 |
| 0.09 | 76.6 | 7200 | 0.6099 | 0.8539 | 0.8544 |
| 0.0905 | 78.72 | 7400 | 0.6381 | 0.8525 | 0.8530 |
| 0.093 | 80.85 | 7600 | 0.6642 | 0.8466 | 0.8470 |
| 0.0888 | 82.98 | 7800 | 0.6228 | 0.8553 | 0.8557 |
| 0.0854 | 85.11 | 8000 | 0.6298 | 0.8526 | 0.8530 |
| 0.0839 | 87.23 | 8200 | 0.6514 | 0.8498 | 0.8504 |
| 0.0789 | 89.36 | 8400 | 0.6437 | 0.8515 | 0.8517 |
| 0.0796 | 91.49 | 8600 | 0.6850 | 0.8465 | 0.8470 |
| 0.0812 | 93.62 | 8800 | 0.6603 | 0.8466 | 0.8470 |
| 0.0816 | 95.74 | 9000 | 0.6931 | 0.8450 | 0.8457 |
| 0.0772 | 97.87 | 9200 | 0.6764 | 0.8478 | 0.8484 |
| 0.075 | 100.0 | 9400 | 0.6582 | 0.8500 | 0.8504 |
| 0.0772 | 102.13 | 9600 | 0.6674 | 0.8493 | 0.8497 |
| 0.0715 | 104.26 | 9800 | 0.6791 | 0.8452 | 0.8457 |
| 0.0758 | 106.38 | 10000 | 0.6793 | 0.8465 | 0.8470 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:12:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3-seqsight\_65536\_512\_47M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3472
* F1 Score: 0.8683
* Accuracy: 0.8684
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5702
- F1 Score: 0.7087
- Accuracy: 0.7085
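The adapter can, in principle, be applied on top of the base checkpoint with PEFT. Below is a minimal loading sketch, assuming the custom base model loads through the standard Auto classes and uses a binary classification head (both are assumptions, since the base checkpoint is not a stock architecture):

```python
# Minimal loading sketch. Assumptions: the custom base model works with the
# Auto classes (trust_remote_code=True) and the task uses a binary head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```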
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6417 | 0.93 | 200 | 0.5973 | 0.6815 | 0.6812 |
| 0.6039 | 1.87 | 400 | 0.6004 | 0.6859 | 0.6871 |
| 0.5916 | 2.8 | 600 | 0.5816 | 0.7040 | 0.7038 |
| 0.5863 | 3.74 | 800 | 0.5817 | 0.7044 | 0.7041 |
| 0.5795 | 4.67 | 1000 | 0.5882 | 0.7058 | 0.7062 |
| 0.5751 | 5.61 | 1200 | 0.5956 | 0.7051 | 0.7067 |
| 0.5732 | 6.54 | 1400 | 0.5767 | 0.7128 | 0.7126 |
| 0.5653 | 7.48 | 1600 | 0.5786 | 0.7123 | 0.7120 |
| 0.5723 | 8.41 | 1800 | 0.5774 | 0.7119 | 0.7117 |
| 0.5682 | 9.35 | 2000 | 0.5854 | 0.7109 | 0.7117 |
| 0.5614 | 10.28 | 2200 | 0.5768 | 0.7124 | 0.7123 |
| 0.5653 | 11.21 | 2400 | 0.5738 | 0.7158 | 0.7158 |
| 0.5605 | 12.15 | 2600 | 0.5763 | 0.7136 | 0.7138 |
| 0.559 | 13.08 | 2800 | 0.5887 | 0.7114 | 0.7126 |
| 0.5598 | 14.02 | 3000 | 0.5760 | 0.7146 | 0.7150 |
| 0.5565 | 14.95 | 3200 | 0.5703 | 0.7176 | 0.7176 |
| 0.5541 | 15.89 | 3400 | 0.5891 | 0.7101 | 0.7120 |
| 0.552 | 16.82 | 3600 | 0.5692 | 0.7192 | 0.7191 |
| 0.5579 | 17.76 | 3800 | 0.5672 | 0.7212 | 0.7211 |
| 0.5528 | 18.69 | 4000 | 0.5698 | 0.7187 | 0.7188 |
| 0.5492 | 19.63 | 4200 | 0.5783 | 0.7161 | 0.7170 |
| 0.5525 | 20.56 | 4400 | 0.5653 | 0.7226 | 0.7226 |
| 0.5496 | 21.5 | 4600 | 0.5951 | 0.7070 | 0.7103 |
| 0.5495 | 22.43 | 4800 | 0.5678 | 0.7221 | 0.7223 |
| 0.5521 | 23.36 | 5000 | 0.5792 | 0.7182 | 0.7196 |
| 0.5458 | 24.3 | 5200 | 0.5668 | 0.7237 | 0.7238 |
| 0.5497 | 25.23 | 5400 | 0.5603 | 0.7257 | 0.7255 |
| 0.5482 | 26.17 | 5600 | 0.5680 | 0.7232 | 0.7235 |
| 0.5479 | 27.1 | 5800 | 0.5718 | 0.7214 | 0.7223 |
| 0.5439 | 28.04 | 6000 | 0.5623 | 0.7295 | 0.7293 |
| 0.5477 | 28.97 | 6200 | 0.5758 | 0.7186 | 0.7196 |
| 0.5463 | 29.91 | 6400 | 0.5683 | 0.7237 | 0.7240 |
| 0.5461 | 30.84 | 6600 | 0.5867 | 0.7164 | 0.7185 |
| 0.5448 | 31.78 | 6800 | 0.5662 | 0.7250 | 0.7252 |
| 0.5426 | 32.71 | 7000 | 0.5676 | 0.7240 | 0.7243 |
| 0.5419 | 33.64 | 7200 | 0.5682 | 0.7239 | 0.7246 |
| 0.5439 | 34.58 | 7400 | 0.5696 | 0.7216 | 0.7223 |
| 0.5425 | 35.51 | 7600 | 0.5626 | 0.7284 | 0.7284 |
| 0.5385 | 36.45 | 7800 | 0.5638 | 0.7287 | 0.7287 |
| 0.5443 | 37.38 | 8000 | 0.5762 | 0.7198 | 0.7211 |
| 0.5399 | 38.32 | 8200 | 0.5670 | 0.7270 | 0.7276 |
| 0.5409 | 39.25 | 8400 | 0.5653 | 0.7284 | 0.7287 |
| 0.5439 | 40.19 | 8600 | 0.5633 | 0.7277 | 0.7279 |
| 0.5406 | 41.12 | 8800 | 0.5669 | 0.7262 | 0.7267 |
| 0.5393 | 42.06 | 9000 | 0.5684 | 0.7268 | 0.7273 |
| 0.543 | 42.99 | 9200 | 0.5738 | 0.7209 | 0.7220 |
| 0.5384 | 43.93 | 9400 | 0.5725 | 0.7238 | 0.7246 |
| 0.5406 | 44.86 | 9600 | 0.5664 | 0.7266 | 0.7270 |
| 0.542 | 45.79 | 9800 | 0.5679 | 0.7259 | 0.7264 |
| 0.5386 | 46.73 | 10000 | 0.5694 | 0.7248 | 0.7255 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:13:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4ac-seqsight\_65536\_512\_47M-L1\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5702
* F1 Score: 0.7087
* Accuracy: 0.7085
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Noise Ko - Dearlie
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Noise Data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0157
- Cer: 41.4126
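For inference, the checkpoint can be used through the standard automatic-speech-recognition pipeline. A minimal sketch (the audio path is a placeholder, and forcing Korean decoding is optional):

```python
# Minimal inference sketch; "sample.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Dearlie/whisper-noise3",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(asr("sample.wav")["text"])
```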
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.3649 | 0.8780 | 1000 | 1.3839 | 54.5000 |
| 1.0173 | 1.7559 | 2000 | 1.1473 | 52.0300 |
| 0.7373 | 2.6339 | 3000 | 1.0454 | 43.7234 |
| 0.5197 | 3.5119 | 4000 | 1.0157 | 41.4126 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["ko"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["AIHub/noise"], "base_model": "openai/whisper-base", "model-index": [{"name": "Whisper Base Noise Ko - Dearlie", "results": []}]} | Dearlie/whisper-noise3 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ko",
"dataset:AIHub/noise",
"base_model:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:13:31+00:00 | [] | [
"ko"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ko #dataset-AIHub/noise #base_model-openai/whisper-base #license-apache-2.0 #endpoints_compatible #region-us
| Whisper Base Noise Ko - Dearlie
===============================
This model is a fine-tuned version of openai/whisper-base on the Noise Data dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0157
* Cer: 41.4126
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ko #dataset-AIHub/noise #base_model-openai/whisper-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioGPT_DocBot_SonatafyAI_V1
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8762
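A minimal generation sketch with the text-generation pipeline (the prompt is illustrative only):

```python
# Minimal generation sketch; the prompt is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation", model="Sonatafyai/BioGPT_DocBot_SonatafyAI_V1"
)
out = generator("What are common symptoms of iron deficiency?", max_new_tokens=60)
print(out[0]["generated_text"])
```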
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1291 | 1.0 | 1109 | 2.9973 |
| 2.8675 | 2.0 | 2218 | 2.9057 |
| 2.7264 | 3.0 | 3327 | 2.8822 |
| 2.6095 | 4.0 | 4436 | 2.8706 |
| 2.548 | 5.0 | 5545 | 2.8762 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/biogpt", "model-index": [{"name": "BioGPT_DocBot_SonatafyAI_V1", "results": []}]} | Sonatafyai/BioGPT_DocBot_SonatafyAI_V1 | null | [
"transformers",
"tensorboard",
"safetensors",
"biogpt",
"text-generation",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:13:46+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #biogpt #text-generation #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BioGPT\_DocBot\_SonatafyAI\_V1
==============================
This model is a fine-tuned version of microsoft/biogpt on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.8762
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #biogpt #text-generation #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5638
- F1 Score: 0.7186
- Accuracy: 0.7185
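If the adapter is a LoRA-style module, it can be merged into the base weights for standalone inference. A minimal sketch under that assumption (the base checkpoint is custom, so `trust_remote_code=True` and the binary head are also assumptions):

```python
# Merge sketch. Assumptions: the adapter is LoRA-style (so merge_and_unload
# applies), the custom base loads via the Auto classes, and the head is binary.
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f"
)
merged = model.merge_and_unload()          # plain transformers model, no PEFT wrapper
merged.save_pretrained("h4ac_l8_merged")   # local output path is a placeholder
```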
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6275 | 0.93 | 200 | 0.5856 | 0.6927 | 0.6924 |
| 0.589 | 1.87 | 400 | 0.5897 | 0.7023 | 0.7032 |
| 0.5744 | 2.8 | 600 | 0.5709 | 0.7072 | 0.7070 |
| 0.5693 | 3.74 | 800 | 0.5721 | 0.7129 | 0.7126 |
| 0.5626 | 4.67 | 1000 | 0.5702 | 0.7176 | 0.7173 |
| 0.5563 | 5.61 | 1200 | 0.5767 | 0.7101 | 0.7109 |
| 0.5549 | 6.54 | 1400 | 0.5737 | 0.7128 | 0.7135 |
| 0.5438 | 7.48 | 1600 | 0.5655 | 0.7219 | 0.7217 |
| 0.5507 | 8.41 | 1800 | 0.5624 | 0.7231 | 0.7229 |
| 0.5442 | 9.35 | 2000 | 0.5599 | 0.7237 | 0.7235 |
| 0.5376 | 10.28 | 2200 | 0.5564 | 0.7277 | 0.7276 |
| 0.5401 | 11.21 | 2400 | 0.5580 | 0.7302 | 0.7299 |
| 0.534 | 12.15 | 2600 | 0.5662 | 0.7264 | 0.7267 |
| 0.5312 | 13.08 | 2800 | 0.5639 | 0.7289 | 0.7287 |
| 0.5321 | 14.02 | 3000 | 0.5519 | 0.7270 | 0.7267 |
| 0.5281 | 14.95 | 3200 | 0.5523 | 0.7317 | 0.7314 |
| 0.5244 | 15.89 | 3400 | 0.5527 | 0.7343 | 0.7340 |
| 0.5211 | 16.82 | 3600 | 0.5606 | 0.7300 | 0.7299 |
| 0.5255 | 17.76 | 3800 | 0.5725 | 0.7209 | 0.7220 |
| 0.522 | 18.69 | 4000 | 0.5527 | 0.7327 | 0.7326 |
| 0.5155 | 19.63 | 4200 | 0.5498 | 0.7376 | 0.7372 |
| 0.5197 | 20.56 | 4400 | 0.5515 | 0.7346 | 0.7343 |
| 0.5178 | 21.5 | 4600 | 0.5649 | 0.7226 | 0.7235 |
| 0.5147 | 22.43 | 4800 | 0.5514 | 0.7370 | 0.7367 |
| 0.5162 | 23.36 | 5000 | 0.5602 | 0.7330 | 0.7331 |
| 0.5122 | 24.3 | 5200 | 0.5509 | 0.7375 | 0.7372 |
| 0.5132 | 25.23 | 5400 | 0.5460 | 0.7381 | 0.7378 |
| 0.5129 | 26.17 | 5600 | 0.5504 | 0.7370 | 0.7367 |
| 0.5098 | 27.1 | 5800 | 0.5520 | 0.7352 | 0.7349 |
| 0.5072 | 28.04 | 6000 | 0.5511 | 0.7390 | 0.7387 |
| 0.5102 | 28.97 | 6200 | 0.5572 | 0.7351 | 0.7349 |
| 0.5079 | 29.91 | 6400 | 0.5584 | 0.7315 | 0.7314 |
| 0.5078 | 30.84 | 6600 | 0.5681 | 0.7282 | 0.7287 |
| 0.5028 | 31.78 | 6800 | 0.5553 | 0.7366 | 0.7364 |
| 0.5059 | 32.71 | 7000 | 0.5563 | 0.7352 | 0.7349 |
| 0.5042 | 33.64 | 7200 | 0.5569 | 0.7308 | 0.7311 |
| 0.5027 | 34.58 | 7400 | 0.5531 | 0.7366 | 0.7364 |
| 0.5031 | 35.51 | 7600 | 0.5531 | 0.7364 | 0.7361 |
| 0.499 | 36.45 | 7800 | 0.5564 | 0.7363 | 0.7361 |
| 0.5026 | 37.38 | 8000 | 0.5586 | 0.7339 | 0.7340 |
| 0.5005 | 38.32 | 8200 | 0.5524 | 0.7357 | 0.7355 |
| 0.4999 | 39.25 | 8400 | 0.5557 | 0.7345 | 0.7343 |
| 0.5014 | 40.19 | 8600 | 0.5566 | 0.7355 | 0.7352 |
| 0.4998 | 41.12 | 8800 | 0.5579 | 0.7347 | 0.7346 |
| 0.4986 | 42.06 | 9000 | 0.5580 | 0.7363 | 0.7361 |
| 0.5021 | 42.99 | 9200 | 0.5613 | 0.7319 | 0.7320 |
| 0.4965 | 43.93 | 9400 | 0.5597 | 0.7347 | 0.7346 |
| 0.4979 | 44.86 | 9600 | 0.5571 | 0.7357 | 0.7355 |
| 0.5001 | 45.79 | 9800 | 0.5572 | 0.7336 | 0.7334 |
| 0.4954 | 46.73 | 10000 | 0.5587 | 0.7353 | 0.7352 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:14:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4ac-seqsight\_65536\_512\_47M-L8\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5638
* F1 Score: 0.7186
* Accuracy: 0.7185
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5976
- F1 Score: 0.7230
- Accuracy: 0.7229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
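With a linear scheduler, no warmup, and 10,000 training steps, the learning rate decays from 5e-4 to 0 over the run. A small sketch of that schedule using a dummy optimizer:

```python
# Sketch of the implied schedule: linear decay from 5e-4 to 0 over 10,000
# steps with no warmup, driven here by a dummy single-parameter optimizer.
import torch
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))
optimizer = AdamW([param], lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)
for _ in range(5_000):       # halfway through training
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # ~[0.00025], half of the initial 5e-4
```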
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6163 | 0.93 | 200 | 0.5941 | 0.6959 | 0.6965 |
| 0.5793 | 1.87 | 400 | 0.5846 | 0.7054 | 0.7062 |
| 0.5645 | 2.8 | 600 | 0.5619 | 0.7224 | 0.7223 |
| 0.5566 | 3.74 | 800 | 0.5643 | 0.7247 | 0.7243 |
| 0.5498 | 4.67 | 1000 | 0.5582 | 0.7258 | 0.7255 |
| 0.5412 | 5.61 | 1200 | 0.5765 | 0.7119 | 0.7132 |
| 0.5392 | 6.54 | 1400 | 0.5703 | 0.7175 | 0.7182 |
| 0.5263 | 7.48 | 1600 | 0.5556 | 0.7307 | 0.7305 |
| 0.5297 | 8.41 | 1800 | 0.5542 | 0.7281 | 0.7279 |
| 0.522 | 9.35 | 2000 | 0.5545 | 0.7337 | 0.7334 |
| 0.5143 | 10.28 | 2200 | 0.5502 | 0.7321 | 0.7320 |
| 0.5141 | 11.21 | 2400 | 0.5602 | 0.7309 | 0.7308 |
| 0.5068 | 12.15 | 2600 | 0.5658 | 0.7300 | 0.7302 |
| 0.5029 | 13.08 | 2800 | 0.5543 | 0.7323 | 0.7320 |
| 0.5013 | 14.02 | 3000 | 0.5646 | 0.7318 | 0.7317 |
| 0.4941 | 14.95 | 3200 | 0.5613 | 0.7361 | 0.7358 |
| 0.4883 | 15.89 | 3400 | 0.5586 | 0.7370 | 0.7367 |
| 0.4829 | 16.82 | 3600 | 0.5657 | 0.7337 | 0.7337 |
| 0.4853 | 17.76 | 3800 | 0.5990 | 0.7200 | 0.7220 |
| 0.479 | 18.69 | 4000 | 0.5616 | 0.7352 | 0.7349 |
| 0.4725 | 19.63 | 4200 | 0.5568 | 0.7369 | 0.7367 |
| 0.475 | 20.56 | 4400 | 0.5594 | 0.7378 | 0.7375 |
| 0.4682 | 21.5 | 4600 | 0.5767 | 0.7313 | 0.7317 |
| 0.4652 | 22.43 | 4800 | 0.5581 | 0.7384 | 0.7381 |
| 0.4614 | 23.36 | 5000 | 0.5728 | 0.7331 | 0.7331 |
| 0.4579 | 24.3 | 5200 | 0.5709 | 0.7407 | 0.7405 |
| 0.4564 | 25.23 | 5400 | 0.5619 | 0.7370 | 0.7367 |
| 0.4548 | 26.17 | 5600 | 0.5749 | 0.7372 | 0.7370 |
| 0.4519 | 27.1 | 5800 | 0.5706 | 0.7326 | 0.7323 |
| 0.4479 | 28.04 | 6000 | 0.5742 | 0.7396 | 0.7393 |
| 0.446 | 28.97 | 6200 | 0.5767 | 0.7378 | 0.7375 |
| 0.4445 | 29.91 | 6400 | 0.5753 | 0.7379 | 0.7378 |
| 0.4399 | 30.84 | 6600 | 0.5980 | 0.7372 | 0.7372 |
| 0.4351 | 31.78 | 6800 | 0.5851 | 0.7386 | 0.7384 |
| 0.4336 | 32.71 | 7000 | 0.5883 | 0.7352 | 0.7349 |
| 0.4319 | 33.64 | 7200 | 0.5929 | 0.7337 | 0.7340 |
| 0.4338 | 34.58 | 7400 | 0.5883 | 0.7369 | 0.7370 |
| 0.4277 | 35.51 | 7600 | 0.5873 | 0.7371 | 0.7370 |
| 0.424 | 36.45 | 7800 | 0.5890 | 0.7401 | 0.7399 |
| 0.4254 | 37.38 | 8000 | 0.5903 | 0.7411 | 0.7411 |
| 0.4238 | 38.32 | 8200 | 0.5865 | 0.7362 | 0.7361 |
| 0.4205 | 39.25 | 8400 | 0.5941 | 0.7387 | 0.7384 |
| 0.4205 | 40.19 | 8600 | 0.5969 | 0.7378 | 0.7375 |
| 0.4193 | 41.12 | 8800 | 0.5965 | 0.7401 | 0.7399 |
| 0.4182 | 42.06 | 9000 | 0.5959 | 0.7398 | 0.7396 |
| 0.4175 | 42.99 | 9200 | 0.6042 | 0.7358 | 0.7358 |
| 0.4139 | 43.93 | 9400 | 0.6035 | 0.7378 | 0.7378 |
| 0.4173 | 44.86 | 9600 | 0.5966 | 0.7403 | 0.7402 |
| 0.4187 | 45.79 | 9800 | 0.5986 | 0.7397 | 0.7396 |
| 0.4146 | 46.73 | 10000 | 0.5997 | 0.7376 | 0.7375 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:14:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4ac-seqsight\_65536\_512\_47M-L32\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5976
* F1 Score: 0.7230
* Accuracy: 0.7229
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_boolq
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5547
- Accuracy: 0.7778
- F1: 0.6806
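A minimal sketch of applying the LoRA adapter to `bert-base-uncased` for a BoolQ-style yes/no question (the question/passage pairing and the label-id mapping are assumptions):

```python
# Minimal sketch: LoRA adapter on top of bert-base-uncased for a BoolQ-style
# yes/no question. The label-id to yes/no mapping is an assumption.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, "lenatr99/lora_fine_tuned_boolq")

enc = tokenizer(
    "is the sky blue",                                        # question
    "The sky appears blue because of Rayleigh scattering.",   # passage
    return_tensors="pt",
)
pred = model(**enc).logits.argmax(-1).item()  # 0 or 1
```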
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.6762 | 4.1667 | 50 | 0.5947 | 0.7778 | 0.6806 |
| 0.6639 | 8.3333 | 100 | 0.5719 | 0.7778 | 0.6806 |
| 0.6555 | 12.5 | 150 | 0.5648 | 0.7778 | 0.6806 |
| 0.6605 | 16.6667 | 200 | 0.5615 | 0.7778 | 0.6806 |
| 0.6612 | 20.8333 | 250 | 0.5568 | 0.7778 | 0.6806 |
| 0.6508 | 25.0 | 300 | 0.5567 | 0.7778 | 0.6806 |
| 0.6491 | 29.1667 | 350 | 0.5550 | 0.7778 | 0.6806 |
| 0.663 | 33.3333 | 400 | 0.5547 | 0.7778 | 0.6806 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "lora_fine_tuned_boolq", "results": []}]} | lenatr99/lora_fine_tuned_boolq | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:14:40+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us
| lora\_fine\_tuned\_boolq
========================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5547
* Accuracy: 0.7778
* F1: 0.6806
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-to-image | diffusers |
# Juggernaut X Hyper + RunDiffusion Official (Community Version)
![juggernaut X Hyper previews](https://storage.googleapis.com/run-diffusion-public-assets/juggernaut-x/juggernaut-x-hyper-0-832.webp)
![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)
This model is not permitted to be used behind API services. Please contact [juggernaut@rundiffusion.com](mailto:juggernaut@rundiffusion.com) for business inquiries, commercial licensing, custom models, and consultation.
Juggernaut X (SAFE) is available exclusively on [RunDiffusion.com](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10hyper)
🎉 Juggernaut X Hyper World Wide Release! 🌍
After almost two months, we are thrilled to announce that the next version of Juggernaut is ready to launch! Introducing Juggernaut X Hyper. 🚀 If you have been following us on Twitter (X), you will have seen the test images. If you aren't following us, do it now! https://x.com/RunDiffusion & Kandoo's new account needs some new followers. Help him out. https://x.com/Juggernaut_AI
- TWO Versions of Juggernaut X Hyper! SFW 🌟 and NSFW 🔥
- Fully trained from the ground up using the GPT4 Vision Captioning tool by LEOSAM 🛠️
- Much improved prompt adherence ✅
- Expanded and cleaner dataset with higher quality images 🖼️
- Improved classifications of shots (Full Body, Midshots, Portraits, etc) 📸
- Enhanced text generation capability 📝
- Two different prompting techniques, Natural and Tagging style 🏷️
- Enhanced by RunDiffusion Photo for refinement of details 🧐
Read more about this version here https://rundiffusion.com/juggernaut-xl
Dual Version Release 🔄
A Safe for Work (SFW) and a Not Safe for Work (NSFW) version of Juggernaut X Hyper will be available. This dual release strategy is designed to cater to diverse preferences and ensure inclusivity, offering the perfect solution for every user.
Our newest Safe for Work edition is available right now exclusively through Fooocus on RunDiffusion.com.
Launch Fooocus on RunDiffusion
Find Juggernaut X_RunDiffusion_Hyper.safetensors and start generating!
It allows users to generate high-quality, suitable images while adhering to safe content guidelines. This version is particularly user-friendly, requiring only simple, straightforward prompts. It's ideal for the workplace, students, educators, and families.
SAFE stands for Suitable Ai For Everyone. 🌈
Conversely, the Not Safe for Work version offers unrestricted creative freedom across all categories and spectrums. This model is perfect for those seeking less constrained artistic expression and is available for free on Civitai.com, though a license is required for commercial use. 🎨
Both models of Juggernaut X Hyper (v10) represent our commitment to fostering a creative community that respects diverse needs and preferences. 🤝
Prompting Guide 📘
Because everything has been trained from the ground up, prompting is a bit different. (Simpler, don't worry) @Kandoo has created a guide to help you seamlessly integrate this powerful model into your workflow, enabling you to leverage its advanced capabilities without feeling overwhelmed. Download it here: https://rundiffusion.com/juggernaut-xl#nav
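For programmatic use, the repository is tagged for the standard SDXL pipeline in diffusers. A minimal sketch under that assumption; the prompt, step count, and guidance scale below are illustrative guesses rather than official recommendations:

```python
# Minimal sketch, assuming the repo loads with the standard SDXL pipeline.
# Prompt, step count, and guidance scale are illustrative guesses.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-X-Hyper", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "cinematic photo of a lighthouse at dawn, volumetric light",
    num_inference_steps=8,   # Hyper-style models target few steps; value is a guess
    guidance_scale=2.0,      # low CFG is common for distilled variants; also a guess
).images[0]
image.save("juggernaut_sample.png")
```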
As always, we love our community and feel so lucky to be in this position to bring these awesome tools and models to you amazing diffusers. Thanks for supporting us since our first day back in 2022. Going on TWO YEARS since we first started using generative AI. Time flies when you're having fun. Wow!
Don't forget to follow us on Twitter, where we have way more updates on the big things we're working on. The future is bright!
https://x.com/RunDiffusion
-RunDiffusion Team
![https://rundiffusion.com?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10hyper](https://i.imgur.com/fKPEqSu.jpg) | {"language": ["en"], "license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture"], "thumbnail": "https://storage.googleapis.com/run-diffusion-public-assets/juggernaut-x/juggernaut-x-hyper-0-256.webp", "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "pipeline_tag": "text-to-image"} | RunDiffusion/Juggernaut-X-Hyper | null | [
"diffusers",
"art",
"people",
"diffusion",
"Cinematic",
"Photography",
"Landscape",
"Interior",
"Food",
"Car",
"Wildlife",
"Architecture",
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-03T16:14:58+00:00 | [] | [
"en"
] | TAGS
#diffusers #art #people #diffusion #Cinematic #Photography #Landscape #Interior #Food #Car #Wildlife #Architecture #text-to-image #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Juggernaut X Hyper + RunDiffusion Official (Community Version)
!juggernaut X Hyper previews
!RunDiffusion Logo
This model is not permitted to be used behind API services. Please contact juggernaut@URL for business inquires, commercial licensing, custom models, and consultation.
Juggernaut X (SAFE) is available exclusivly on URL
Juggernaut X Hyper World Wide Release!
After almost two months, we are thrilled to announce the next version of Juggernaut is ready to launch! Introducing Juggernaut X Hyper. If you would have been following us on Twitter (X) you would have been seeing the test images. If you aren't following us, do it now! https://x.com/RunDiffusion & Kandoo's new account needs some new followers. Help him out. https://x.com/Juggernaut_AI
- TWO Versions of Juggernaut X Hyper! SFW and NSFW
- Fully trained from the ground up using the GPT4 Vision Captioning tool by LEOSAM ️
- Much improved prompt adherence
- Expanded and cleaner dataset with higher quality images ️
- Improved classifications of shots (Full Body, Midshots, Portraits, etc)
- Enhanced text generation capability
- Two different prompting techniques, Natural and Tagging style ️
- Enhanced by RunDiffusion Photo for refinement of details
Read more about this version here URL
Dual Version Release
A Safe for Work (SFW) and a Not Safe for Work (NSFW) version of Juggernaut X Hyper will be available. This dual release strategy is designed to cater to diverse preferences and ensure inclusivity, offering the perfect solution for every user.
Our newest Safe for Work edition is available right now exclusively through Fooocus on URL.
Launch Fooocus on RunDiffusion
Find Juggernaut X_RunDiffusion_Hyper.safetensors and start generating!
It allows users to generate high-quality, suitable images while adhering to safe content guidelines. This version is particularly user-friendly, requiring only simple, straightforward prompts. It's ideal for the workplace, students, educators, and families.
SAFE stands for Suitable Ai For Everyone.
Conversely, the Not Safe for Work version offers unrestricted creative freedom across all categories and spectrums. This model is perfect for those seeking less constrained artistic expression and is available for free on URL, though a license is required for commercial use.
Both models of Juggernaut X Hyper (v10) represent our commitment to fostering a creative community that respects diverse needs and preferences.
Prompting Guide
Because everything has been trained from the ground up, prompting is a bit different. (Simpler, don't worry) @Kandoo has created a guide to help you seamlessly integrate this powerful model into your workflow, enabling you to leverage its advanced capabilities without feeling overwhelmed. Download it here: URL
As always, we love our community and feel so lucky to be in this position to bring these awesome tools and models to you amazing diffusers. Thanks for supporting us since our first day back in 2022. Going on TWO YEARS since we first started using generative Ai. Time flies when you're having fun. wow!
Don't forget to follow us on Twitter where we have way more updates on big things we're working on. The future is bright
https://x.com/RunDiffusion
-RunDiffusion Team
!URL?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10hyper | [
"# Juggernaut X Hyper + RunDiffusion Official (Community Version)\n!juggernaut X Hyper previews\n!RunDiffusion Logo\nThis model is not permitted to be used behind API services. Please contact juggernaut@URL for business inquires, commercial licensing, custom models, and consultation.\n\nJuggernaut X (SAFE) is available exclusivly on URL\n\n Juggernaut X Hyper World Wide Release! \n\nAfter almost two months, we are thrilled to announce the next version of Juggernaut is ready to launch! Introducing Juggernaut X Hyper. If you would have been following us on Twitter (X) you would have been seeing the test images. If you aren't following us, do it now! https://x.com/RunDiffusion & Kandoo's new account needs some new followers. Help him out. https://x.com/Juggernaut_AI\n\n- TWO Versions of Juggernaut X Hyper! SFW and NSFW \n- Fully trained from the ground up using the GPT4 Vision Captioning tool by LEOSAM ️\n- Much improved prompt adherence \n- Expanded and cleaner dataset with higher quality images ️\n- Improved classifications of shots (Full Body, Midshots, Portraits, etc) \n- Enhanced text generation capability \n- Two different prompting techniques, Natural and Tagging style ️\n- Enhanced by RunDiffusion Photo for refinement of details \n\nRead more about this version here URL\n\nDual Version Release \nA Safe for Work (SFW) and a Not Safe for Work (NSFW) version of Juggernaut X Hyper will be available. This dual release strategy is designed to cater to diverse preferences and ensure inclusivity, offering the perfect solution for every user.\n\nOur newest Safe for Work edition is available right now exclusively through Fooocus on URL. \nLaunch Fooocus on RunDiffusion\nFind Juggernaut X_RunDiffusion_Hyper.safetensors and start generating!\nIt allows users to generate high-quality, suitable images while adhering to safe content guidelines. This version is particularly user-friendly, requiring only simple, straightforward prompts. It's ideal for the workplace, students, educators, and families.\nSAFE stands for Suitable Ai For Everyone. \n\nConversely, the Not Safe for Work version offers unrestricted creative freedom across all categories and spectrums. This model is perfect for those seeking less constrained artistic expression and is available for free on URL, though a license is required for commercial use. \n\nBoth models of Juggernaut X Hyper (v10) represent our commitment to fostering a creative community that respects diverse needs and preferences. \n\nPrompting Guide \nBecause everything has been trained from the ground up, prompting is a bit different. (Simpler, don't worry) @Kandoo has created a guide to help you seamlessly integrate this powerful model into your workflow, enabling you to leverage its advanced capabilities without feeling overwhelmed. Download it here: URL\n\nAs always, we love our community and feel so lucky to be in this position to bring these awesome tools and models to you amazing diffusers. Thanks for supporting us since our first day back in 2022. Going on TWO YEARS since we first started using generative Ai. Time flies when you're having fun. wow!\n\nDon't forget to follow us on Twitter where we have way more updates on big things we're working on. The future is bright\n\nhttps://x.com/RunDiffusion\n\n-RunDiffusion Team \n\n!URL?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10hyper"
] | [
"TAGS\n#diffusers #art #people #diffusion #Cinematic #Photography #Landscape #Interior #Food #Car #Wildlife #Architecture #text-to-image #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Juggernaut X Hyper + RunDiffusion Official (Community Version)\n!juggernaut X Hyper previews\n!RunDiffusion Logo\nThis model is not permitted to be used behind API services. Please contact juggernaut@URL for business inquires, commercial licensing, custom models, and consultation.\n\nJuggernaut X (SAFE) is available exclusivly on URL\n\n Juggernaut X Hyper World Wide Release! \n\nAfter almost two months, we are thrilled to announce the next version of Juggernaut is ready to launch! Introducing Juggernaut X Hyper. If you would have been following us on Twitter (X) you would have been seeing the test images. If you aren't following us, do it now! https://x.com/RunDiffusion & Kandoo's new account needs some new followers. Help him out. https://x.com/Juggernaut_AI\n\n- TWO Versions of Juggernaut X Hyper! SFW and NSFW \n- Fully trained from the ground up using the GPT4 Vision Captioning tool by LEOSAM ️\n- Much improved prompt adherence \n- Expanded and cleaner dataset with higher quality images ️\n- Improved classifications of shots (Full Body, Midshots, Portraits, etc) \n- Enhanced text generation capability \n- Two different prompting techniques, Natural and Tagging style ️\n- Enhanced by RunDiffusion Photo for refinement of details \n\nRead more about this version here URL\n\nDual Version Release \nA Safe for Work (SFW) and a Not Safe for Work (NSFW) version of Juggernaut X Hyper will be available. This dual release strategy is designed to cater to diverse preferences and ensure inclusivity, offering the perfect solution for every user.\n\nOur newest Safe for Work edition is available right now exclusively through Fooocus on URL. \nLaunch Fooocus on RunDiffusion\nFind Juggernaut X_RunDiffusion_Hyper.safetensors and start generating!\nIt allows users to generate high-quality, suitable images while adhering to safe content guidelines. This version is particularly user-friendly, requiring only simple, straightforward prompts. It's ideal for the workplace, students, educators, and families.\nSAFE stands for Suitable Ai For Everyone. \n\nConversely, the Not Safe for Work version offers unrestricted creative freedom across all categories and spectrums. This model is perfect for those seeking less constrained artistic expression and is available for free on URL, though a license is required for commercial use. \n\nBoth models of Juggernaut X Hyper (v10) represent our commitment to fostering a creative community that respects diverse needs and preferences. \n\nPrompting Guide \nBecause everything has been trained from the ground up, prompting is a bit different. (Simpler, don't worry) @Kandoo has created a guide to help you seamlessly integrate this powerful model into your workflow, enabling you to leverage its advanced capabilities without feeling overwhelmed. Download it here: URL\n\nAs always, we love our community and feel so lucky to be in this position to bring these awesome tools and models to you amazing diffusers. Thanks for supporting us since our first day back in 2022. Going on TWO YEARS since we first started using generative Ai. Time flies when you're having fun. wow!\n\nDon't forget to follow us on Twitter where we have way more updates on big things we're working on. The future is bright\n\nhttps://x.com/RunDiffusion\n\n-RunDiffusion Team \n\n!URL?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10hyper"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny chinese - VingeNie
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8984
- Cer Ortho: 82.5463
- Cer: 30.3139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25
- training_steps: 50
- mixed_precision_training: Native AMP
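As a minimal sketch (not the exact training script), these settings map onto `Seq2SeqTrainingArguments`, the usual choice for Whisper fine-tuning; the output directory is a placeholder:

```python
# Minimal sketch (not the exact training script): the settings above mapped
# onto Seq2SeqTrainingArguments. The output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-zh_CN",   # placeholder
    learning_rate=1e-8,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=25,
    max_steps=50,
    fp16=True,                          # mixed_precision_training: Native AMP
)
```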
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.2618 | 0.0400 | 50 | 0.8984 | 82.5463 | 30.3139 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]} | VingeNie/whisper-tiny-zh_CN_lr4_3600 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:15:03+00:00 | [] | [
"zh"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us
| Whisper Tiny chinese - VingeNie
===============================
This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8984
* Cer Ortho: 82.5463
* Cer: 30.3139
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-08
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 25
* training\_steps: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.0.1+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-08\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-08\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-ko-7b - GGUF
- Model creator: https://huggingface.co/beomi/
- Original model: https://huggingface.co/beomi/llama-2-ko-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-2-ko-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q2_K.gguf) | Q2_K | 2.42GB |
| [llama-2-ko-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ3_XS.gguf) | IQ3_XS | 2.67GB |
| [llama-2-ko-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ3_S.gguf) | IQ3_S | 2.81GB |
| [llama-2-ko-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K_S.gguf) | Q3_K_S | 2.81GB |
| [llama-2-ko-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ3_M.gguf) | IQ3_M | 2.97GB |
| [llama-2-ko-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K.gguf) | Q3_K | 3.14GB |
| [llama-2-ko-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K_M.gguf) | Q3_K_M | 3.14GB |
| [llama-2-ko-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K_L.gguf) | Q3_K_L | 3.42GB |
| [llama-2-ko-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ4_XS.gguf) | IQ4_XS | 3.47GB |
| [llama-2-ko-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_0.gguf) | Q4_0 | 3.64GB |
| [llama-2-ko-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ4_NL.gguf) | IQ4_NL | 3.66GB |
| [llama-2-ko-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_K_S.gguf) | Q4_K_S | 3.67GB |
| [llama-2-ko-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_K.gguf) | Q4_K | 3.88GB |
| [llama-2-ko-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_K_M.gguf) | Q4_K_M | 3.88GB |
| [llama-2-ko-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_1.gguf) | Q4_1 | 4.03GB |
| [llama-2-ko-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_0.gguf) | Q5_0 | 4.42GB |
| [llama-2-ko-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_K_S.gguf) | Q5_K_S | 4.42GB |
| [llama-2-ko-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_K.gguf) | Q5_K | 4.54GB |
| [llama-2-ko-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_K_M.gguf) | Q5_K_M | 4.54GB |
| [llama-2-ko-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_1.gguf) | Q5_1 | 4.8GB |
| [llama-2-ko-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q6_K.gguf) | Q6_K | 5.24GB |
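As a quick usage sketch, any of the files above can be downloaded and loaded locally; one common option is `llama-cpp-python`. The file name, context size, and sampling settings below are arbitrary examples:

```python
from llama_cpp import Llama

# Load a locally downloaded quant from the table above (Q4_K_M shown).
llm = Llama(model_path="llama-2-ko-7b.Q4_K_M.gguf", n_ctx=2048)

# Simple completion; this is a base model, so plain text prompts work best.
out = llm("대한민국의 수도는", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```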
Original model description:
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
---
**Update Log**
- 2023.12.27
- New Model is here! Trained with only open-accessible Korean text corpus: https://huggingface.co/beomi/open-llama-2-ko-7b
- 2023.10.19
  - Fix Tokenizer bug (space not applied when decoding) after `transformers>=4.34.0`
# **Llama-2-Ko** 🦙🇰🇷
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the 7B pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.
## Model Details
**Model Developers** Junbum Lee (Beomi)
**Variations** Llama-2-Ko will come in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of Korean online data*|7B|4k|✗|>40B*|1e<sup>-5</sup>|
*Plan to train up to 200B tokens
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | Sentencepiece BPE |
| **Expanded Llama-2-Ko** | 46336 | Sentencepiece BPE. Added Korean vocab and merges |
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요']` |
| Llama-2-Ko | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요']` |
**Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
| Llama-2-Ko | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
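The comparison above can be reproduced with a short snippet; this is a sketch assuming the original `beomi/llama-2-ko-7b` repository and the fast tokenizer noted later in this card:

```python
from transformers import AutoTokenizer

# Llama-2-Ko ships a Hugging Face fast tokenizer (use_fast=True is required).
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b", use_fast=True)

for text in [
    "안녕하세요, 오늘은 날씨가 좋네요.",
    "Llama 2: Open Foundation and Fine-Tuned Chat Models",
]:
    print(tokenizer.tokenize(text))
```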
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot
### NSMC (Acc) - 50000 full test
TBD
### COPA (F1)
<img src=https://user-images.githubusercontent.com/11323660/255575809-c037bc6e-0566-436a-a6c1-2329ac92187a.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| https://huggingface.co/kakaobrain/kogpt | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| https://huggingface.co/facebook/xglm-7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| Llama-2 Original 7B* | 0.562033 | 0.575982 | 0.576216 | 0.595532 |
| Llama-2-Ko-7b 20B (10k) | 0.738780 | 0.762639 | 0.780761 | 0.797863 |
| Llama-2-Ko-7b 40B (20k) | 0.743630 | 0.792716 | 0.803746 | 0.825944 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### HellaSwag (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576090-a2bfc1ae-d117-44b7-9f7b-262e41179ec1.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| https://huggingface.co/kakaobrain/kogpt | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| https://huggingface.co/facebook/xglm-7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| Llama-2 Original 7B* | 0.415390 | 0.431382 | 0.421342 | 0.442003 |
| Llama-2-Ko-7b 20B (10k) | 0.451757 | 0.466751 | 0.472607 | 0.482776 |
| Llama-2-Ko-7b 40B (20k) | 0.456246 | 0.465665 | 0.469810 | 0.477374 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### BoolQ (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576343-5d847a6f-3b6a-41a7-af37-0f11940a5ea4.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| https://huggingface.co/kakaobrain/kogpt | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| https://huggingface.co/facebook/xglm-7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| Llama-2 Original 7B* | 0.352050 | 0.563238 | 0.474788 | 0.419222 |
| Llama-2-Ko-7b 20B (10k) | 0.360656 | 0.679743 | 0.680109 | 0.662152 |
| Llama-2-Ko-7b 40B (20k) | 0.578640 | 0.697747 | 0.708358 | 0.714423 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### SentiNeg (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576572-b005a81d-fa4d-4709-b48a-f0fe4eed17a3.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| https://huggingface.co/kakaobrain/kogpt | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| https://huggingface.co/facebook/xglm-7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| Llama-2 Original 7B* | 0.347502 | 0.529124 | 0.480641 | 0.788457 |
| Llama-2-Ko-7b 20B (10k) | 0.485546 | 0.829503 | 0.871141 | 0.851253 |
| Llama-2-Ko-7b 40B (20k) | 0.459447 | 0.761079 | 0.727611 | 0.936988 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
## Note for oobabooga/text-generation-webui
Remove `ValueError` at the `load_tokenizer` function (line 109 or near), in `modules/models.py`.
```diff
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses the FastTokenizer provided by HF tokenizers, NOT the sentencepiece package,
it is required to use the `use_fast=True` option when initializing the tokenizer.
Apple Silicon does not support BF16 computing; use the CPU instead. (BF16 is supported when using an NVIDIA GPU)
## Citation
```
@misc {l._junbum_2023,
author = { {L. Junbum} },
title = { llama-2-ko-7b (Revision 4a9993e) },
year = 2023,
url = { https://huggingface.co/beomi/llama-2-ko-7b },
doi = { 10.57967/hf/1098 },
publisher = { Hugging Face }
}
```
## Acknowledgement
The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__llama-2-ko-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 39.43 |
| ARC (25-shot) | 48.46 |
| HellaSwag (10-shot) | 75.28 |
| MMLU (5-shot) | 39.56 |
| TruthfulQA (0-shot) | 34.49 |
| Winogrande (5-shot) | 72.14 |
| GSM8K (5-shot) | 1.97 |
| DROP (3-shot) | 4.1 |
| {} | RichardErkhov/beomi_-_llama-2-ko-7b-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-03T16:15:23+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
llama-2-ko-7b - GGUF
* Model creator: URL
* Original model: URL
Name: llama-2-ko-7b.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.42GB
Name: llama-2-ko-7b.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.67GB
Name: llama-2-ko-7b.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.81GB
Name: llama-2-ko-7b.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.81GB
Name: llama-2-ko-7b.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 2.97GB
Name: llama-2-ko-7b.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.14GB
Name: llama-2-ko-7b.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.14GB
Name: llama-2-ko-7b.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.42GB
Name: llama-2-ko-7b.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.47GB
Name: llama-2-ko-7b.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.64GB
Name: llama-2-ko-7b.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.66GB
Name: llama-2-ko-7b.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.67GB
Name: llama-2-ko-7b.Q4\_K.gguf, Quant method: Q4\_K, Size: 3.88GB
Name: llama-2-ko-7b.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 3.88GB
Name: llama-2-ko-7b.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.03GB
Name: llama-2-ko-7b.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.42GB
Name: llama-2-ko-7b.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.42GB
Name: llama-2-ko-7b.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.54GB
Name: llama-2-ko-7b.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.54GB
Name: llama-2-ko-7b.Q5\_1.gguf, Quant method: Q5\_1, Size: 4.8GB
Name: llama-2-ko-7b.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.24GB
Original model description:
---------------------------
language:
* en
* ko
pipeline\_tag: text-generation
inference: false
tags:
* facebook
* meta
* pytorch
* llama
* llama-2
* kollama
* llama-2-ko
---
Update Log
* 2023.12.27
+ New Model is here! Trained with only open-accessible Korean text corpus: URL
* 2023.10.19
	+ Fix Tokenizer bug (space not applied when decoding) after 'transformers>=4.34.0'
Llama-2-Ko 🇰🇷
=============
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the 7B pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.
Model Details
-------------
Model Developers Junbum Lee (Beomi)
Variations Llama-2-Ko will come in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
Input Models input text only.
Output Models generate text only.
Model Architecture
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
Vocab Expansion
Model Name: Original Llama-2, Vocabulary Size: 32000, Description: Sentencepiece BPE
Model Name: Expanded Llama-2-Ko, Vocabulary Size: 46336, Description: Sentencepiece BPE. Added Korean vocab and merges
Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."
Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"
Model Benchmark
===============
LM Eval Harness - Korean (polyglot branch)
------------------------------------------
* Used EleutherAI's lm-evaluation-harness URL
### NSMC (Acc) - 50000 full test
TBD
### COPA (F1)
![](URL)
### HellaSwag (F1)
![](URL)
### BoolQ (F1)
![](URL)
### SentiNeg (F1)
![](URL)
Note for oobabooga/text-generation-webui
----------------------------------------
Remove 'ValueError' at the 'load\_tokenizer' function (line 109 or near), in 'modules/URL'.
Since Llama-2-Ko uses the FastTokenizer provided by HF tokenizers, NOT the sentencepiece package,
it is required to use the 'use\_fast=True' option when initializing the tokenizer.
Apple Silicon does not support BF16 computing; use the CPU instead. (BF16 is supported when using an NVIDIA GPU)
Acknowledgement
---------------
The training is supported by TPU Research Cloud program.
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### NSMC (Acc) - 50000 full test\n\n\nTBD",
"### COPA (F1)\n\n\n![](URL)",
"### HellaSwag (F1)\n\n\n![](URL)",
"### BoolQ (F1)\n\n\n![](URL)",
"### SentiNeg (F1)\n\n\n![](URL)\n\nNote for oobabooga/text-generation-webui\n----------------------------------------\n\n\nRemove 'ValueError' at 'load\\_tokenizer' function(line 109 or near), in 'modules/URL'.\n\n\nSince Llama-2-Ko uses FastTokenizer provided by HF tokenizers NOT sentencepiece package,\nit is required to use 'use\\_fast=True' option when initialize tokenizer.\n\n\nApple Sillicon does not support BF16 computing, use CPU instead. (BF16 is supported when using NVIDIA GPU)\n\n\nAcknowledgement\n---------------\n\n\nThe training is supported by TPU Research Cloud program.\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#gguf #region-us \n",
"### NSMC (Acc) - 50000 full test\n\n\nTBD",
"### COPA (F1)\n\n\n![](URL)",
"### HellaSwag (F1)\n\n\n![](URL)",
"### BoolQ (F1)\n\n\n![](URL)",
"### SentiNeg (F1)\n\n\n![](URL)\n\nNote for oobabooga/text-generation-webui\n----------------------------------------\n\n\nRemove 'ValueError' at 'load\\_tokenizer' function(line 109 or near), in 'modules/URL'.\n\n\nSince Llama-2-Ko uses FastTokenizer provided by HF tokenizers NOT sentencepiece package,\nit is required to use 'use\\_fast=True' option when initialize tokenizer.\n\n\nApple Sillicon does not support BF16 computing, use CPU instead. (BF16 is supported when using NVIDIA GPU)\n\n\nAcknowledgement\n---------------\n\n\nThe training is supported by TPU Research Cloud program.\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
audio-to-audio | null | Jack from Genndy Tartakovsky's "Samurai Jack" Cartoon Network/Adult Swim show!
For RVC/RVCv2 AI Covers
Made with Weights.gg
Feel free to use! Don't forget to credit if used
Note: Jack's voice is naturally deep so don't forget to adjust your song's pitch so he can fully sound like himself
Model by Radaverse | @samurairad | {"language": ["en"], "license": "openrail", "tags": ["RVC", "RVCv2", "AI", "Cover", "Voice", "Cartoon", "Samurai Jack"], "pipeline_tag": "audio-to-audio"} | Radaverse/SamuraiJack-RVCv2 | null | [
"RVC",
"RVCv2",
"AI",
"Cover",
"Voice",
"Cartoon",
"Samurai Jack",
"audio-to-audio",
"en",
"license:openrail",
"region:us"
] | null | 2024-05-03T16:15:49+00:00 | [] | [
"en"
] | TAGS
#RVC #RVCv2 #AI #Cover #Voice #Cartoon #Samurai Jack #audio-to-audio #en #license-openrail #region-us
| Jack from Genndy Tartakovsky's "Samurai Jack" Cartoon Network/Adult Swim show!
For RVC/RVCv2 AI Covers
Made with URL
Feel free to use! Don't forget to credit if used
Note: Jack's voice is naturally deep so don't forget to adjust your song's pitch so he can fully sound like himself
Model by Radaverse | @samurairad | [] | [
"TAGS\n#RVC #RVCv2 #AI #Cover #Voice #Cartoon #Samurai Jack #audio-to-audio #en #license-openrail #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4367
- F1 Score: 0.8170
- Accuracy: 0.8173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
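The card does not state the adapter configuration, but with PEFT 0.9.0 (listed under framework versions below) the setup would look roughly like the sketch here; the classification head, `trust_remote_code`, and all LoRA values are assumptions:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base model id from this card; the head type and label count are assumptions.
base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M",
    num_labels=2,
    trust_remote_code=True,  # may be needed if the base model uses a custom architecture
)

# LoRA values and target modules are placeholders -- the card does not specify them.
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```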
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5342 | 1.1 | 200 | 0.4669 | 0.8027 | 0.8027 |
| 0.4727 | 2.21 | 400 | 0.4585 | 0.7966 | 0.7982 |
| 0.4678 | 3.31 | 600 | 0.4500 | 0.8050 | 0.8058 |
| 0.4585 | 4.42 | 800 | 0.4482 | 0.8051 | 0.8062 |
| 0.4586 | 5.52 | 1000 | 0.4469 | 0.8050 | 0.8062 |
| 0.4519 | 6.63 | 1200 | 0.4499 | 0.8032 | 0.8048 |
| 0.4567 | 7.73 | 1400 | 0.4412 | 0.8097 | 0.8103 |
| 0.4482 | 8.84 | 1600 | 0.4460 | 0.8039 | 0.8051 |
| 0.4492 | 9.94 | 1800 | 0.4426 | 0.8105 | 0.8103 |
| 0.4476 | 11.05 | 2000 | 0.4397 | 0.8074 | 0.8083 |
| 0.4472 | 12.15 | 2200 | 0.4359 | 0.8109 | 0.8114 |
| 0.4424 | 13.26 | 2400 | 0.4347 | 0.8093 | 0.8100 |
| 0.4412 | 14.36 | 2600 | 0.4350 | 0.8097 | 0.8100 |
| 0.4441 | 15.47 | 2800 | 0.4438 | 0.8012 | 0.8031 |
| 0.4389 | 16.57 | 3000 | 0.4347 | 0.8085 | 0.8089 |
| 0.4408 | 17.68 | 3200 | 0.4338 | 0.8093 | 0.8100 |
| 0.4352 | 18.78 | 3400 | 0.4318 | 0.8126 | 0.8128 |
| 0.4363 | 19.89 | 3600 | 0.4363 | 0.8085 | 0.8096 |
| 0.4377 | 20.99 | 3800 | 0.4340 | 0.8094 | 0.8100 |
| 0.4367 | 22.1 | 4000 | 0.4326 | 0.8103 | 0.8110 |
| 0.4356 | 23.2 | 4200 | 0.4325 | 0.8113 | 0.8121 |
| 0.436 | 24.31 | 4400 | 0.4342 | 0.8125 | 0.8131 |
| 0.4275 | 25.41 | 4600 | 0.4359 | 0.8140 | 0.8148 |
| 0.4331 | 26.52 | 4800 | 0.4318 | 0.8132 | 0.8135 |
| 0.4341 | 27.62 | 5000 | 0.4310 | 0.8130 | 0.8135 |
| 0.4297 | 28.73 | 5200 | 0.4298 | 0.8112 | 0.8117 |
| 0.428 | 29.83 | 5400 | 0.4309 | 0.8138 | 0.8141 |
| 0.4299 | 30.94 | 5600 | 0.4318 | 0.8105 | 0.8107 |
| 0.4299 | 32.04 | 5800 | 0.4303 | 0.8141 | 0.8141 |
| 0.4309 | 33.15 | 6000 | 0.4284 | 0.8149 | 0.8152 |
| 0.4284 | 34.25 | 6200 | 0.4307 | 0.8125 | 0.8128 |
| 0.4275 | 35.36 | 6400 | 0.4322 | 0.8123 | 0.8131 |
| 0.4272 | 36.46 | 6600 | 0.4292 | 0.8162 | 0.8162 |
| 0.4286 | 37.57 | 6800 | 0.4303 | 0.8141 | 0.8145 |
| 0.4263 | 38.67 | 7000 | 0.4320 | 0.8136 | 0.8141 |
| 0.4246 | 39.78 | 7200 | 0.4304 | 0.8165 | 0.8166 |
| 0.4268 | 40.88 | 7400 | 0.4290 | 0.8150 | 0.8152 |
| 0.4263 | 41.99 | 7600 | 0.4290 | 0.8153 | 0.8155 |
| 0.4243 | 43.09 | 7800 | 0.4303 | 0.8161 | 0.8166 |
| 0.4262 | 44.2 | 8000 | 0.4295 | 0.8141 | 0.8145 |
| 0.4233 | 45.3 | 8200 | 0.4301 | 0.8152 | 0.8155 |
| 0.4256 | 46.41 | 8400 | 0.4286 | 0.8148 | 0.8152 |
| 0.4238 | 47.51 | 8600 | 0.4293 | 0.8156 | 0.8159 |
| 0.4236 | 48.62 | 8800 | 0.4312 | 0.8136 | 0.8141 |
| 0.4221 | 49.72 | 9000 | 0.4301 | 0.8142 | 0.8145 |
| 0.4283 | 50.83 | 9200 | 0.4296 | 0.8131 | 0.8135 |
| 0.4232 | 51.93 | 9400 | 0.4299 | 0.8142 | 0.8145 |
| 0.4238 | 53.04 | 9600 | 0.4297 | 0.8142 | 0.8145 |
| 0.4218 | 54.14 | 9800 | 0.4295 | 0.8149 | 0.8152 |
| 0.424 | 55.25 | 10000 | 0.4300 | 0.8145 | 0.8148 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:16:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_65536\_512\_47M-L1\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4367
* F1 Score: 0.8170
* Accuracy: 0.8173
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
This model is a DPO fine-tune of the `meta-llama/Meta-Llama-3-70B-Instruct` model.
# Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF)
# Prompt Template
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
You can use this model by using `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4` as the model name in Hugging Face's
transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|im_end|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>") # safer to have this too
]
outputs = pipeline(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Sample Response
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
describe about pros and cons of docker system.<|im_end|>
<|im_start|>assistant
```
```
Here are the pros and cons of the Docker system:
**Pros:**
1. **Lightweight**: Docker containers are lightweight and use fewer resources compared to traditional virtual machines.
2. **Isolation**: Docker provides a high level of isolation between containers, ensuring that each application runs independently without interfering with others.
3. **Portability**: Docker containers are highly portable and can run on any system that supports Docker, without requiring specific dependencies or configurations.
4. **Efficient Use of Resources**: Docker containers can share the same kernel and use fewer resources, making them more efficient than traditional virtual machines.
5. **Fast Deployment**: Docker containers can be spun up quickly, making it ideal for rapid deployment and scaling of applications.
6. **Easy Maintenance**: Docker provides a simple way to manage and update applications, with features like rolling updates and rollbacks.
7. **Security**: Docker provides a secure environment for applications, with features like network isolation and encryption.
8. **Large Ecosystem**: Docker has a large and active ecosystem, with a wide range of tools and plugins available for various tasks.
9. **Version Control**: Docker provides an easy way to manage different versions of an application, with features like version tagging and rollbacks.
10. **Cost-Effective**: Docker can help reduce costs by minimizing the need for virtual machines and reducing the overhead of resource allocation.
**Cons:**
1. **Steep Learning Curve**: Docker has a complex architecture and requires a good understanding of containerization, networking, and storage.
2. **Overhead**: While Docker containers are lightweight, they still require some overhead in terms of resources and management.
3. **Security Risks**: If not properly configured, Docker containers can introduce security risks, such as unauthorized access to the host system.
4. **Limited Support for GUI Applications**: Docker is primarily designed for running command-line applications, and running GUI applications can be challenging.
5. **Networking Complexity**: Docker's networking model can be complex, especially in large-scale deployments.
6. **Storage Complexity**: Docker's storage model can be complex, especially when dealing with persistent data.
7. **Limited Support for Legacy Applications**: Docker is designed for modern, cloud-native applications, and may not be suitable for legacy applications that rely on older technologies.
8. **Dependence on Docker Hub**: Docker relies heavily on Docker Hub, which can be a single point of failure if the hub is down or experiencing issues.
9. **Orchestration Complexity**: Docker provides a built-in orchestration tool, Docker Swarm, but it can be complex to set up and manage.
10. **Vendor Lock-in**: Docker is a proprietary technology, and while it has a large ecosystem, it can be difficult to switch to alternative containerization platforms.
Overall, Docker provides a powerful and flexible way to deploy and manage applications, but it requires careful planning, configuration, and management to ensure optimal performance and security.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-70B-Instruct-DPO-v0.4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |78.89|
|AI2 Reasoning Challenge (25-Shot)|72.61|
|HellaSwag (10-Shot) |86.03|
|MMLU (5-Shot) |80.50|
|TruthfulQA (0-shot) |63.26|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |87.34|
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama", "llama-3", "chatml"], "datasets": ["argilla/ultrafeedback-binarized-preferences"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "Llama-3-70B-Instruct-DPO-v0.4", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.03, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.5, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.26}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 87.34, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}]}]} | blockblockblock/Llama-3-70B-Instruct-DPO-v0.4-bpw2.5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"finetune",
"dpo",
"facebook",
"meta",
"pytorch",
"llama-3",
"chatml",
"conversational",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:17:38+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us
| ![Llama-3 DPO Logo](./URL)
MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
===========================================
This model is a DPO fine-tune of the 'meta-llama/Meta-Llama-3-70B-Instruct' model.
Quantized GGUF
==============
All GGUF models are available here: MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF
Prompt Template
===============
This model uses 'ChatML' prompt template:
'
How to use
==========
You can use this model by using 'MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4' as the model name in Hugging Face's
transformers library.
Sample Response
---------------
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ar08/ar08 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:19:14+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4309
- F1 Score: 0.8185
- Accuracy: 0.8187
## Model description
More information needed
## Intended uses & limitations
More information needed
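For inference, the adapter in this repository would typically be loaded on top of the base model; this is a hedged sketch in which the head type, label count, and the example input are assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"                           # from this card
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f"  # this repository

# Head type and label count are assumptions; trust_remote_code may be required.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```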
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5148 | 1.1 | 200 | 0.4601 | 0.8090 | 0.8089 |
| 0.4624 | 2.21 | 400 | 0.4479 | 0.8057 | 0.8065 |
| 0.4592 | 3.31 | 600 | 0.4440 | 0.8066 | 0.8076 |
| 0.4474 | 4.42 | 800 | 0.4400 | 0.8033 | 0.8044 |
| 0.4463 | 5.52 | 1000 | 0.4429 | 0.8030 | 0.8048 |
| 0.4386 | 6.63 | 1200 | 0.4471 | 0.8024 | 0.8048 |
| 0.4403 | 7.73 | 1400 | 0.4353 | 0.8077 | 0.8089 |
| 0.4328 | 8.84 | 1600 | 0.4478 | 0.8019 | 0.8041 |
| 0.4305 | 9.94 | 1800 | 0.4266 | 0.8190 | 0.8190 |
| 0.4265 | 11.05 | 2000 | 0.4371 | 0.8041 | 0.8051 |
| 0.4265 | 12.15 | 2200 | 0.4269 | 0.8185 | 0.8183 |
| 0.4207 | 13.26 | 2400 | 0.4243 | 0.8151 | 0.8155 |
| 0.4176 | 14.36 | 2600 | 0.4245 | 0.8184 | 0.8183 |
| 0.4192 | 15.47 | 2800 | 0.4285 | 0.8111 | 0.8117 |
| 0.414 | 16.57 | 3000 | 0.4283 | 0.8175 | 0.8173 |
| 0.4149 | 17.68 | 3200 | 0.4244 | 0.8161 | 0.8162 |
| 0.4094 | 18.78 | 3400 | 0.4262 | 0.8172 | 0.8176 |
| 0.4091 | 19.89 | 3600 | 0.4239 | 0.8140 | 0.8141 |
| 0.4087 | 20.99 | 3800 | 0.4302 | 0.8091 | 0.8100 |
| 0.4076 | 22.1 | 4000 | 0.4246 | 0.8108 | 0.8114 |
| 0.4059 | 23.2 | 4200 | 0.4253 | 0.8144 | 0.8148 |
| 0.4057 | 24.31 | 4400 | 0.4300 | 0.8124 | 0.8131 |
| 0.3982 | 25.41 | 4600 | 0.4299 | 0.8159 | 0.8162 |
| 0.4019 | 26.52 | 4800 | 0.4289 | 0.8184 | 0.8187 |
| 0.4036 | 27.62 | 5000 | 0.4294 | 0.8112 | 0.8121 |
| 0.3975 | 28.73 | 5200 | 0.4243 | 0.8114 | 0.8121 |
| 0.3938 | 29.83 | 5400 | 0.4255 | 0.8134 | 0.8138 |
| 0.3966 | 30.94 | 5600 | 0.4280 | 0.8160 | 0.8162 |
| 0.3953 | 32.04 | 5800 | 0.4275 | 0.8214 | 0.8214 |
| 0.3972 | 33.15 | 6000 | 0.4261 | 0.8150 | 0.8155 |
| 0.3931 | 34.25 | 6200 | 0.4297 | 0.8170 | 0.8173 |
| 0.3914 | 35.36 | 6400 | 0.4287 | 0.8140 | 0.8145 |
| 0.393 | 36.46 | 6600 | 0.4275 | 0.8181 | 0.8183 |
| 0.3901 | 37.57 | 6800 | 0.4299 | 0.8136 | 0.8141 |
| 0.3893 | 38.67 | 7000 | 0.4314 | 0.8153 | 0.8159 |
| 0.3881 | 39.78 | 7200 | 0.4304 | 0.8184 | 0.8187 |
| 0.3886 | 40.88 | 7400 | 0.4277 | 0.8189 | 0.8190 |
| 0.3859 | 41.99 | 7600 | 0.4314 | 0.8162 | 0.8166 |
| 0.3869 | 43.09 | 7800 | 0.4308 | 0.8169 | 0.8173 |
| 0.3859 | 44.2 | 8000 | 0.4329 | 0.8149 | 0.8155 |
| 0.3839 | 45.3 | 8200 | 0.4341 | 0.8159 | 0.8162 |
| 0.3871 | 46.41 | 8400 | 0.4291 | 0.8184 | 0.8187 |
| 0.3848 | 47.51 | 8600 | 0.4327 | 0.8172 | 0.8176 |
| 0.3837 | 48.62 | 8800 | 0.4334 | 0.8164 | 0.8169 |
| 0.383 | 49.72 | 9000 | 0.4334 | 0.8158 | 0.8162 |
| 0.388 | 50.83 | 9200 | 0.4328 | 0.8160 | 0.8166 |
| 0.3826 | 51.93 | 9400 | 0.4316 | 0.8169 | 0.8173 |
| 0.3819 | 53.04 | 9600 | 0.4315 | 0.8166 | 0.8169 |
| 0.3815 | 54.14 | 9800 | 0.4318 | 0.8170 | 0.8173 |
| 0.3831 | 55.25 | 10000 | 0.4325 | 0.8166 | 0.8169 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:19:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_65536\_512\_47M-L8\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4309
* F1 Score: 0.8185
* Accuracy: 0.8187
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4374
- F1 Score: 0.8193
- Accuracy: 0.8193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
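As a minimal sketch only (not part of the original card; it assumes no warmup, since none is listed), the Adam optimizer and the linear schedule over the 10000 training steps could be instantiated like this:

```python
# Minimal sketch under the listed hyperparameters; the Linear layer is a
# stand-in for the actual PEFT-wrapped model, which is not shown here.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 2)  # placeholder model, not the real checkpoint
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # no warmup listed, assumed 0
)
```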
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5042 | 1.1 | 200 | 0.4523 | 0.8134 | 0.8135 |
| 0.4575 | 2.21 | 400 | 0.4432 | 0.8135 | 0.8135 |
| 0.453 | 3.31 | 600 | 0.4381 | 0.8068 | 0.8079 |
| 0.4367 | 4.42 | 800 | 0.4333 | 0.8096 | 0.8107 |
| 0.4327 | 5.52 | 1000 | 0.4302 | 0.8134 | 0.8145 |
| 0.423 | 6.63 | 1200 | 0.4528 | 0.8043 | 0.8065 |
| 0.4233 | 7.73 | 1400 | 0.4418 | 0.8010 | 0.8031 |
| 0.4154 | 8.84 | 1600 | 0.4534 | 0.7936 | 0.7961 |
| 0.4116 | 9.94 | 1800 | 0.4231 | 0.8144 | 0.8145 |
| 0.4052 | 11.05 | 2000 | 0.4394 | 0.8028 | 0.8037 |
| 0.4028 | 12.15 | 2200 | 0.4245 | 0.8196 | 0.8197 |
| 0.397 | 13.26 | 2400 | 0.4251 | 0.8144 | 0.8148 |
| 0.3917 | 14.36 | 2600 | 0.4285 | 0.8201 | 0.8200 |
| 0.3907 | 15.47 | 2800 | 0.4296 | 0.8129 | 0.8131 |
| 0.3827 | 16.57 | 3000 | 0.4302 | 0.8171 | 0.8169 |
| 0.3821 | 17.68 | 3200 | 0.4380 | 0.8186 | 0.8187 |
| 0.3754 | 18.78 | 3400 | 0.4418 | 0.8105 | 0.8110 |
| 0.371 | 19.89 | 3600 | 0.4367 | 0.8177 | 0.8176 |
| 0.3684 | 20.99 | 3800 | 0.4477 | 0.8107 | 0.8110 |
| 0.3639 | 22.1 | 4000 | 0.4422 | 0.8158 | 0.8159 |
| 0.3605 | 23.2 | 4200 | 0.4480 | 0.8144 | 0.8145 |
| 0.3561 | 24.31 | 4400 | 0.4502 | 0.8163 | 0.8166 |
| 0.3478 | 25.41 | 4600 | 0.4584 | 0.8175 | 0.8173 |
| 0.3503 | 26.52 | 4800 | 0.4596 | 0.8121 | 0.8121 |
| 0.3491 | 27.62 | 5000 | 0.4524 | 0.8113 | 0.8117 |
| 0.3407 | 28.73 | 5200 | 0.4644 | 0.8110 | 0.8117 |
| 0.3349 | 29.83 | 5400 | 0.4509 | 0.8151 | 0.8152 |
| 0.3364 | 30.94 | 5600 | 0.4585 | 0.8171 | 0.8169 |
| 0.3328 | 32.04 | 5800 | 0.4492 | 0.8199 | 0.8197 |
| 0.3307 | 33.15 | 6000 | 0.4530 | 0.8164 | 0.8166 |
| 0.3277 | 34.25 | 6200 | 0.4746 | 0.8175 | 0.8173 |
| 0.3223 | 35.36 | 6400 | 0.4711 | 0.8181 | 0.8183 |
| 0.3192 | 36.46 | 6600 | 0.4757 | 0.8187 | 0.8187 |
| 0.3178 | 37.57 | 6800 | 0.4753 | 0.8139 | 0.8141 |
| 0.3153 | 38.67 | 7000 | 0.4703 | 0.8165 | 0.8169 |
| 0.3129 | 39.78 | 7200 | 0.4812 | 0.8196 | 0.8197 |
| 0.3105 | 40.88 | 7400 | 0.4763 | 0.8143 | 0.8141 |
| 0.3064 | 41.99 | 7600 | 0.4652 | 0.8180 | 0.8180 |
| 0.306 | 43.09 | 7800 | 0.4787 | 0.8145 | 0.8145 |
| 0.3041 | 44.2 | 8000 | 0.4898 | 0.8150 | 0.8152 |
| 0.3014 | 45.3 | 8200 | 0.4882 | 0.8173 | 0.8173 |
| 0.3005 | 46.41 | 8400 | 0.4859 | 0.8173 | 0.8173 |
| 0.3006 | 47.51 | 8600 | 0.4895 | 0.8143 | 0.8145 |
| 0.2973 | 48.62 | 8800 | 0.4882 | 0.8124 | 0.8124 |
| 0.2961 | 49.72 | 9000 | 0.4937 | 0.8140 | 0.8141 |
| 0.3008 | 50.83 | 9200 | 0.4829 | 0.8128 | 0.8131 |
| 0.2934 | 51.93 | 9400 | 0.4918 | 0.8133 | 0.8135 |
| 0.2928 | 53.04 | 9600 | 0.4910 | 0.8149 | 0.8148 |
| 0.2936 | 54.14 | 9800 | 0.4936 | 0.8156 | 0.8155 |
| 0.2934 | 55.25 | 10000 | 0.4941 | 0.8135 | 0.8135 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:19:50+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_65536\_512\_47M-L32\_f
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4374
* F1 Score: 0.8193
* Accuracy: 0.8193
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | InayaKripa/gemma-2b-toxic-ConvoV1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:23:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
This model is a fine-tuned version of the ```distilbert/distilbert-base-multilingual-cased``` model for the **Bengali** and **Hindi** languages.
The dataset used is the Kaggle dataset [Modified-hate-speech-bengali-hindi](https://www.kaggle.com/datasets/abirmondal/modified-hate-speech-bengali-hindi).
This model can classify Bengali and Hindi texts into the following 5 classes:
- defamation
- hate
- non-hate
- violence
- vulgar | {"language": ["bn", "hi"], "license": "apache-2.0"} | kingshukroy/distilbert-base-multilingual-cased-hate-speech-ben-hin | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"bn",
"hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:23:54+00:00 | [] | [
"bn",
"hi"
] | TAGS
#transformers #safetensors #distilbert #text-classification #bn #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This model is a fine-tuned version of the distilbert/distilbert-base-multilingual-cased model for the Bengali and Hindi languages.
The dataset used is a Kaggle Dataset - Modified-hate-speech-bengali-hindi
This model can classify Bengali and Hindi texts into the following 5 classes:
- defamation
- hate
- non-hate
- violence
- vulgar | [] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #bn #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_finetued_on_scigen_server
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 65536
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
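For orientation only (this arithmetic is not text from the original card, and a single device is assumed), the listed total train batch size follows directly from the per-device batch size and the gradient accumulation steps:

```python
# Simple check of the listed total_train_batch_size.
per_device_train_batch_size = 256
gradient_accumulation_steps = 256
num_devices = 1  # assumption: the card does not state the device count
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 65536, matching the value listed above
```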
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral_finetued_on_scigen_server", "results": []}]} | moetezsa/mistral_finetued_on_scigen_server | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:24:44+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
|
# mistral_finetued_on_scigen_server
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 65536
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# mistral_finetued_on_scigen_server\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 256\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 256\n- total_train_batch_size: 65536\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"# mistral_finetued_on_scigen_server\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 256\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 256\n- total_train_batch_size: 65536\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototipo_5_EMI
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4215
- Accuracy: 0.538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
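As an illustrative sketch only (the argument names are standard Hugging Face `TrainingArguments` options, the output directory is a placeholder, and anything not listed above is an assumption), these hyperparameters could be expressed as:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; "Native AMP" is mapped to fp16=True.
training_args = TrainingArguments(
    output_dir="prototipo_5_emi",        # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,
)
```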
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.2459 | 0.1481 | 200 | 1.2168 | 0.4493 |
| 1.1445 | 0.2963 | 400 | 1.0823 | 0.512 |
| 1.1117 | 0.4444 | 600 | 1.0979 | 0.5053 |
| 1.0618 | 0.5926 | 800 | 1.0457 | 0.5273 |
| 1.0343 | 0.7407 | 1000 | 1.0219 | 0.537 |
| 1.1239 | 0.8889 | 1200 | 1.0353 | 0.5257 |
| 0.9012 | 1.0370 | 1400 | 1.0637 | 0.5383 |
| 0.86 | 1.1852 | 1600 | 1.0682 | 0.5333 |
| 0.898 | 1.3333 | 1800 | 1.0341 | 0.5483 |
| 0.929 | 1.4815 | 2000 | 1.0437 | 0.5363 |
| 0.9921 | 1.6296 | 2200 | 0.9968 | 0.5473 |
| 0.9776 | 1.7778 | 2400 | 1.0418 | 0.5553 |
| 0.9166 | 1.9259 | 2600 | 0.9874 | 0.5573 |
| 0.703 | 2.0741 | 2800 | 1.0564 | 0.556 |
| 0.8123 | 2.2222 | 3000 | 1.0582 | 0.561 |
| 0.6727 | 2.3704 | 3200 | 1.0942 | 0.5483 |
| 0.6843 | 2.5185 | 3400 | 1.1128 | 0.558 |
| 0.7528 | 2.6667 | 3600 | 1.0823 | 0.5547 |
| 0.7747 | 2.8148 | 3800 | 1.0744 | 0.5497 |
| 0.7471 | 2.9630 | 4000 | 1.0749 | 0.5527 |
| 0.5774 | 3.1111 | 4200 | 1.1422 | 0.552 |
| 0.6105 | 3.2593 | 4400 | 1.2226 | 0.543 |
| 0.573 | 3.4074 | 4600 | 1.2427 | 0.5417 |
| 0.6047 | 3.5556 | 4800 | 1.2403 | 0.537 |
| 0.5334 | 3.7037 | 5000 | 1.2470 | 0.5413 |
| 0.5688 | 3.8519 | 5200 | 1.2585 | 0.5507 |
| 0.4928 | 4.0 | 5400 | 1.2653 | 0.5437 |
| 0.4314 | 4.1481 | 5600 | 1.3419 | 0.541 |
| 0.4556 | 4.2963 | 5800 | 1.3677 | 0.5413 |
| 0.4815 | 4.4444 | 6000 | 1.3912 | 0.5407 |
| 0.4431 | 4.5926 | 6200 | 1.4004 | 0.5347 |
| 0.4312 | 4.7407 | 6400 | 1.4161 | 0.5397 |
| 0.459 | 4.8889 | 6600 | 1.4215 | 0.538 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dccuchile/bert-base-spanish-wwm-uncased", "model-index": [{"name": "Prototipo_5_EMI", "results": []}]} | Armandodelca/Prototipo_5_EMI | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:25:41+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dccuchile/bert-base-spanish-wwm-uncased #autotrain_compatible #endpoints_compatible #region-us
| Prototipo\_5\_EMI
=================
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4215
* Accuracy: 0.538
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 20
* eval\_batch\_size: 20
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 20\n* eval\\_batch\\_size: 20\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dccuchile/bert-base-spanish-wwm-uncased #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 20\n* eval\\_batch\\_size: 20\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | Barcenas 3.8b
It is based on Phi-3-Mini-4K-Boost from DeepMount00 and was trained with the pinzhenchen/alpaca-cleaned-es dataset to improve Spanish conversations.
The goal of this model is to have a small LLM that can express itself correctly and fluently in the Spanish language.
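A minimal usage sketch (assumed, not taken from the original card; the repository id comes from this entry, and applying the model's chat template may give better results) could look like this:

```python
from transformers import pipeline

# Minimal Spanish text-generation example; prompt and generation length are arbitrary.
generator = pipeline("text-generation", model="Danielbrdz/Barcenas-3.8b")
prompt = "Explica brevemente qué es la fotosíntesis."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```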
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 | {"language": ["es", "en"], "license": "mit", "tags": ["phi"]} | Danielbrdz/Barcenas-3.8b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"phi",
"conversational",
"es",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:25:51+00:00 | [] | [
"es",
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #phi #conversational #es #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Barcenas 3.8b
It is based on Phi-3-Mini-4K-Boost from DeepMount00 and was trained with the pinzhenchen/alpaca-cleaned-es dataset to improve Spanish conversations.
The goal of this model is to have a small LLM that can express itself correctly and fluently in the Spanish language.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 | [] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #phi #conversational #es #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain15c | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:25:57+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/l9w0l2v | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:27:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5417
- F1 Score: 0.7453
- Accuracy: 0.7465
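To experiment with this checkpoint, a hedged loading sketch could look like the following (only the two repository ids come from this card; the sequence-classification head, the label count, and the auto class are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 is an assumption; trust_remote_code=True may additionally be
# required depending on the base architecture.
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
```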
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6298 | 1.01 | 200 | 0.6057 | 0.6909 | 0.6929 |
| 0.6045 | 2.02 | 400 | 0.6003 | 0.6998 | 0.7004 |
| 0.5948 | 3.03 | 600 | 0.5930 | 0.7061 | 0.7074 |
| 0.5909 | 4.04 | 800 | 0.5868 | 0.7121 | 0.7146 |
| 0.5809 | 5.05 | 1000 | 0.5821 | 0.7155 | 0.7172 |
| 0.578 | 6.06 | 1200 | 0.5811 | 0.7148 | 0.7191 |
| 0.573 | 7.07 | 1400 | 0.5783 | 0.7180 | 0.7213 |
| 0.5737 | 8.08 | 1600 | 0.5748 | 0.7206 | 0.7235 |
| 0.5703 | 9.09 | 1800 | 0.5703 | 0.7262 | 0.7279 |
| 0.5664 | 10.1 | 2000 | 0.5725 | 0.7213 | 0.7222 |
| 0.5643 | 11.11 | 2200 | 0.5712 | 0.7248 | 0.7270 |
| 0.5647 | 12.12 | 2400 | 0.5695 | 0.7278 | 0.7292 |
| 0.563 | 13.13 | 2600 | 0.5682 | 0.7251 | 0.7270 |
| 0.5629 | 14.14 | 2800 | 0.5641 | 0.7292 | 0.7314 |
| 0.5582 | 15.15 | 3000 | 0.5625 | 0.7289 | 0.7307 |
| 0.5586 | 16.16 | 3200 | 0.5639 | 0.7267 | 0.7295 |
| 0.5564 | 17.17 | 3400 | 0.5630 | 0.7295 | 0.7323 |
| 0.5565 | 18.18 | 3600 | 0.5582 | 0.7325 | 0.7336 |
| 0.5531 | 19.19 | 3800 | 0.5613 | 0.7311 | 0.7336 |
| 0.5546 | 20.2 | 4000 | 0.5590 | 0.7314 | 0.7330 |
| 0.5507 | 21.21 | 4200 | 0.5631 | 0.7337 | 0.7367 |
| 0.552 | 22.22 | 4400 | 0.5596 | 0.7344 | 0.7358 |
| 0.5501 | 23.23 | 4600 | 0.5615 | 0.7342 | 0.7367 |
| 0.5545 | 24.24 | 4800 | 0.5566 | 0.7385 | 0.7405 |
| 0.5478 | 25.25 | 5000 | 0.5563 | 0.7372 | 0.7386 |
| 0.5501 | 26.26 | 5200 | 0.5585 | 0.7345 | 0.7355 |
| 0.5459 | 27.27 | 5400 | 0.5563 | 0.7350 | 0.7367 |
| 0.5483 | 28.28 | 5600 | 0.5585 | 0.7332 | 0.7339 |
| 0.5521 | 29.29 | 5800 | 0.5566 | 0.7362 | 0.7386 |
| 0.5451 | 30.3 | 6000 | 0.5552 | 0.7358 | 0.7371 |
| 0.5469 | 31.31 | 6200 | 0.5547 | 0.7378 | 0.7396 |
| 0.5478 | 32.32 | 6400 | 0.5564 | 0.7350 | 0.7380 |
| 0.5417 | 33.33 | 6600 | 0.5552 | 0.7365 | 0.7390 |
| 0.5433 | 34.34 | 6800 | 0.5562 | 0.7347 | 0.7377 |
| 0.5425 | 35.35 | 7000 | 0.5530 | 0.7407 | 0.7421 |
| 0.5473 | 36.36 | 7200 | 0.5529 | 0.7363 | 0.7380 |
| 0.5431 | 37.37 | 7400 | 0.5531 | 0.7360 | 0.7374 |
| 0.542 | 38.38 | 7600 | 0.5538 | 0.7373 | 0.7383 |
| 0.5421 | 39.39 | 7800 | 0.5536 | 0.7358 | 0.7371 |
| 0.544 | 40.4 | 8000 | 0.5536 | 0.7371 | 0.7386 |
| 0.5428 | 41.41 | 8200 | 0.5535 | 0.7369 | 0.7393 |
| 0.5474 | 42.42 | 8400 | 0.5529 | 0.7382 | 0.7405 |
| 0.5415 | 43.43 | 8600 | 0.5530 | 0.7345 | 0.7364 |
| 0.54 | 44.44 | 8800 | 0.5529 | 0.7367 | 0.7383 |
| 0.5416 | 45.45 | 9000 | 0.5526 | 0.7380 | 0.7396 |
| 0.5424 | 46.46 | 9200 | 0.5527 | 0.7364 | 0.7383 |
| 0.5451 | 47.47 | 9400 | 0.5525 | 0.7361 | 0.7380 |
| 0.5367 | 48.48 | 9600 | 0.5528 | 0.7359 | 0.7377 |
| 0.5461 | 49.49 | 9800 | 0.5524 | 0.7362 | 0.7380 |
| 0.5409 | 50.51 | 10000 | 0.5525 | 0.7369 | 0.7386 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:28:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_65536\_512\_47M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5417
* F1 Score: 0.7453
* Accuracy: 0.7465
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5342
- F1 Score: 0.7479
- Accuracy: 0.7503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
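For orientation, the hyperparameters above roughly correspond to the following 🤗 `TrainingArguments`. This is only a sketch: the actual training script (including the PEFT/LoRA configuration and data preprocessing) is not part of this card, the `output_dir` is just the model name, and the evaluation cadence is read off the results table below.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; not the authors' actual script.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f",  # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",  # the results table reports metrics every 200 steps
    eval_steps=200,
)
```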
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6119 | 1.01 | 200 | 0.5885 | 0.7076 | 0.7118 |
| 0.5758 | 2.02 | 400 | 0.5650 | 0.7279 | 0.7295 |
| 0.5634 | 3.03 | 600 | 0.5565 | 0.7319 | 0.7342 |
| 0.5565 | 4.04 | 800 | 0.5557 | 0.7387 | 0.7405 |
| 0.5482 | 5.05 | 1000 | 0.5473 | 0.7424 | 0.7434 |
| 0.543 | 6.06 | 1200 | 0.5546 | 0.7293 | 0.7345 |
| 0.5323 | 7.07 | 1400 | 0.5504 | 0.7414 | 0.7440 |
| 0.531 | 8.08 | 1600 | 0.5479 | 0.7375 | 0.7402 |
| 0.5291 | 9.09 | 1800 | 0.5429 | 0.7434 | 0.7459 |
| 0.5196 | 10.1 | 2000 | 0.5444 | 0.7473 | 0.7494 |
| 0.5147 | 11.11 | 2200 | 0.5500 | 0.7482 | 0.7513 |
| 0.5142 | 12.12 | 2400 | 0.5434 | 0.7444 | 0.7462 |
| 0.5088 | 13.13 | 2600 | 0.5517 | 0.7404 | 0.7431 |
| 0.5083 | 14.14 | 2800 | 0.5411 | 0.7478 | 0.7494 |
| 0.4974 | 15.15 | 3000 | 0.5450 | 0.7427 | 0.7449 |
| 0.4994 | 16.16 | 3200 | 0.5419 | 0.7409 | 0.7440 |
| 0.4928 | 17.17 | 3400 | 0.5453 | 0.7485 | 0.7503 |
| 0.4909 | 18.18 | 3600 | 0.5473 | 0.7428 | 0.7443 |
| 0.4837 | 19.19 | 3800 | 0.5471 | 0.7458 | 0.7478 |
| 0.4813 | 20.2 | 4000 | 0.5473 | 0.7402 | 0.7421 |
| 0.4778 | 21.21 | 4200 | 0.5463 | 0.7410 | 0.7440 |
| 0.473 | 22.22 | 4400 | 0.5585 | 0.7418 | 0.7434 |
| 0.4706 | 23.23 | 4600 | 0.5573 | 0.7435 | 0.7446 |
| 0.4706 | 24.24 | 4800 | 0.5544 | 0.7398 | 0.7424 |
| 0.4626 | 25.25 | 5000 | 0.5588 | 0.7457 | 0.7465 |
| 0.463 | 26.26 | 5200 | 0.5579 | 0.7399 | 0.7408 |
| 0.4582 | 27.27 | 5400 | 0.5557 | 0.7385 | 0.7415 |
| 0.4599 | 28.28 | 5600 | 0.5634 | 0.7389 | 0.7399 |
| 0.4575 | 29.29 | 5800 | 0.5552 | 0.7401 | 0.7431 |
| 0.453 | 30.3 | 6000 | 0.5668 | 0.7389 | 0.7405 |
| 0.4525 | 31.31 | 6200 | 0.5550 | 0.7407 | 0.7421 |
| 0.4521 | 32.32 | 6400 | 0.5617 | 0.7406 | 0.7434 |
| 0.4442 | 33.33 | 6600 | 0.5689 | 0.7395 | 0.7424 |
| 0.442 | 34.34 | 6800 | 0.5624 | 0.7455 | 0.7472 |
| 0.4384 | 35.35 | 7000 | 0.5687 | 0.7431 | 0.7449 |
| 0.4474 | 36.36 | 7200 | 0.5577 | 0.7393 | 0.7412 |
| 0.4343 | 37.37 | 7400 | 0.5663 | 0.7425 | 0.7443 |
| 0.4362 | 38.38 | 7600 | 0.5668 | 0.7413 | 0.7424 |
| 0.4359 | 39.39 | 7800 | 0.5707 | 0.7376 | 0.7386 |
| 0.4341 | 40.4 | 8000 | 0.5797 | 0.7384 | 0.7396 |
| 0.4292 | 41.41 | 8200 | 0.5780 | 0.7390 | 0.7421 |
| 0.4375 | 42.42 | 8400 | 0.5716 | 0.7386 | 0.7405 |
| 0.4293 | 43.43 | 8600 | 0.5735 | 0.7406 | 0.7424 |
| 0.4273 | 44.44 | 8800 | 0.5755 | 0.7391 | 0.7412 |
| 0.4243 | 45.45 | 9000 | 0.5755 | 0.7413 | 0.7427 |
| 0.426 | 46.46 | 9200 | 0.5778 | 0.7400 | 0.7418 |
| 0.4296 | 47.47 | 9400 | 0.5708 | 0.7387 | 0.7408 |
| 0.4172 | 48.48 | 9600 | 0.5781 | 0.7385 | 0.7408 |
| 0.4309 | 49.49 | 9800 | 0.5737 | 0.7396 | 0.7418 |
| 0.4227 | 50.51 | 10000 | 0.5749 | 0.7402 | 0.7421 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:29:06+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_65536\_512\_47M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5342
* F1 Score: 0.7479
* Accuracy: 0.7503
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5308
- F1 Score: 0.7509
- Accuracy: 0.7532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6204 | 1.01 | 200 | 0.5983 | 0.6993 | 0.7027 |
| 0.5905 | 2.02 | 400 | 0.5808 | 0.7134 | 0.7165 |
| 0.5754 | 3.03 | 600 | 0.5707 | 0.7255 | 0.7279 |
| 0.5712 | 4.04 | 800 | 0.5692 | 0.7207 | 0.7235 |
| 0.5627 | 5.05 | 1000 | 0.5617 | 0.7320 | 0.7339 |
| 0.5601 | 6.06 | 1200 | 0.5642 | 0.7224 | 0.7266 |
| 0.5519 | 7.07 | 1400 | 0.5591 | 0.7345 | 0.7371 |
| 0.5513 | 8.08 | 1600 | 0.5567 | 0.7322 | 0.7352 |
| 0.5475 | 9.09 | 1800 | 0.5500 | 0.7391 | 0.7412 |
| 0.5416 | 10.1 | 2000 | 0.5502 | 0.7406 | 0.7424 |
| 0.5385 | 11.11 | 2200 | 0.5543 | 0.7401 | 0.7427 |
| 0.5387 | 12.12 | 2400 | 0.5486 | 0.7430 | 0.7443 |
| 0.5355 | 13.13 | 2600 | 0.5536 | 0.7396 | 0.7418 |
| 0.5353 | 14.14 | 2800 | 0.5485 | 0.7441 | 0.7462 |
| 0.5297 | 15.15 | 3000 | 0.5485 | 0.7435 | 0.7456 |
| 0.5287 | 16.16 | 3200 | 0.5456 | 0.7401 | 0.7431 |
| 0.5269 | 17.17 | 3400 | 0.5484 | 0.7426 | 0.7453 |
| 0.5257 | 18.18 | 3600 | 0.5428 | 0.7473 | 0.7487 |
| 0.5213 | 19.19 | 3800 | 0.5433 | 0.7402 | 0.7431 |
| 0.5215 | 20.2 | 4000 | 0.5433 | 0.7452 | 0.7472 |
| 0.5196 | 21.21 | 4200 | 0.5514 | 0.7434 | 0.7465 |
| 0.5188 | 22.22 | 4400 | 0.5465 | 0.7448 | 0.7472 |
| 0.5182 | 23.23 | 4600 | 0.5449 | 0.7456 | 0.7481 |
| 0.5198 | 24.24 | 4800 | 0.5446 | 0.7427 | 0.7456 |
| 0.514 | 25.25 | 5000 | 0.5429 | 0.7467 | 0.7481 |
| 0.5147 | 26.26 | 5200 | 0.5465 | 0.7484 | 0.7491 |
| 0.5109 | 27.27 | 5400 | 0.5419 | 0.7468 | 0.7487 |
| 0.514 | 28.28 | 5600 | 0.5464 | 0.7447 | 0.7453 |
| 0.5148 | 29.29 | 5800 | 0.5456 | 0.7457 | 0.7487 |
| 0.5083 | 30.3 | 6000 | 0.5455 | 0.7457 | 0.7472 |
| 0.509 | 31.31 | 6200 | 0.5444 | 0.7466 | 0.7481 |
| 0.5105 | 32.32 | 6400 | 0.5460 | 0.7460 | 0.7484 |
| 0.5053 | 33.33 | 6600 | 0.5483 | 0.7447 | 0.7475 |
| 0.5054 | 34.34 | 6800 | 0.5445 | 0.7463 | 0.7487 |
| 0.5027 | 35.35 | 7000 | 0.5424 | 0.7487 | 0.7503 |
| 0.5086 | 36.36 | 7200 | 0.5405 | 0.7464 | 0.7481 |
| 0.5017 | 37.37 | 7400 | 0.5435 | 0.7440 | 0.7456 |
| 0.4999 | 38.38 | 7600 | 0.5433 | 0.7486 | 0.7497 |
| 0.5025 | 39.39 | 7800 | 0.5442 | 0.7475 | 0.7484 |
| 0.5038 | 40.4 | 8000 | 0.5467 | 0.7470 | 0.7484 |
| 0.501 | 41.41 | 8200 | 0.5445 | 0.7416 | 0.7443 |
| 0.5077 | 42.42 | 8400 | 0.5422 | 0.7472 | 0.7494 |
| 0.4974 | 43.43 | 8600 | 0.5435 | 0.7464 | 0.7481 |
| 0.4987 | 44.44 | 8800 | 0.5445 | 0.7446 | 0.7462 |
| 0.4971 | 45.45 | 9000 | 0.5449 | 0.7468 | 0.7484 |
| 0.499 | 46.46 | 9200 | 0.5438 | 0.7470 | 0.7487 |
| 0.503 | 47.47 | 9400 | 0.5428 | 0.7437 | 0.7456 |
| 0.4927 | 48.48 | 9600 | 0.5450 | 0.7463 | 0.7481 |
| 0.5038 | 49.49 | 9800 | 0.5438 | 0.7460 | 0.7478 |
| 0.4975 | 50.51 | 10000 | 0.5441 | 0.7454 | 0.7472 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:29:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_65536\_512\_47M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5308
* F1 Score: 0.7509
* Accuracy: 0.7532
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5000
- F1 Score: 0.7751
- Accuracy: 0.7769
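Because this checkpoint is a PEFT adapter trained on top of the seqsight base model, loading it for inference might look like the sketch below. The repo ids are taken from this card, but the number of labels, the tokenization of DNA sequences, and any `trust_remote_code` requirement are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f"  # this repo

# Assumption: binary classification head (histone mark present / absent).
tokenizer = AutoTokenizer.from_pretrained(base_id)  # add trust_remote_code=True if required
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("ACGT" * 32, return_tensors="pt")  # placeholder DNA sequence
logits = model(**inputs).logits
print(logits)
```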
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5805 | 0.92 | 200 | 0.5475 | 0.7329 | 0.7351 |
| 0.533 | 1.83 | 400 | 0.5360 | 0.7384 | 0.7411 |
| 0.5243 | 2.75 | 600 | 0.5261 | 0.7465 | 0.7483 |
| 0.5216 | 3.67 | 800 | 0.5199 | 0.7555 | 0.7563 |
| 0.5103 | 4.59 | 1000 | 0.5198 | 0.7563 | 0.7583 |
| 0.5074 | 5.5 | 1200 | 0.5137 | 0.7593 | 0.7615 |
| 0.5047 | 6.42 | 1400 | 0.5086 | 0.7731 | 0.7738 |
| 0.5017 | 7.34 | 1600 | 0.5109 | 0.7695 | 0.7712 |
| 0.4951 | 8.26 | 1800 | 0.5114 | 0.7696 | 0.7718 |
| 0.499 | 9.17 | 2000 | 0.5101 | 0.7674 | 0.7701 |
| 0.4968 | 10.09 | 2200 | 0.5107 | 0.7670 | 0.7704 |
| 0.4928 | 11.01 | 2400 | 0.5085 | 0.7655 | 0.7689 |
| 0.4914 | 11.93 | 2600 | 0.5024 | 0.7741 | 0.7764 |
| 0.4898 | 12.84 | 2800 | 0.5021 | 0.7707 | 0.7732 |
| 0.4886 | 13.76 | 3000 | 0.5087 | 0.7676 | 0.7709 |
| 0.4853 | 14.68 | 3200 | 0.4988 | 0.7759 | 0.7775 |
| 0.489 | 15.6 | 3400 | 0.5080 | 0.7675 | 0.7712 |
| 0.4866 | 16.51 | 3600 | 0.5003 | 0.7750 | 0.7769 |
| 0.4851 | 17.43 | 3800 | 0.4924 | 0.7816 | 0.7830 |
| 0.4856 | 18.35 | 4000 | 0.4995 | 0.7763 | 0.7787 |
| 0.4816 | 19.27 | 4200 | 0.4990 | 0.7754 | 0.7775 |
| 0.4845 | 20.18 | 4400 | 0.5034 | 0.7717 | 0.7749 |
| 0.4832 | 21.1 | 4600 | 0.4975 | 0.7765 | 0.7787 |
| 0.4828 | 22.02 | 4800 | 0.5014 | 0.7756 | 0.7778 |
| 0.4829 | 22.94 | 5000 | 0.4969 | 0.7744 | 0.7769 |
| 0.4803 | 23.85 | 5200 | 0.4996 | 0.7732 | 0.7761 |
| 0.4788 | 24.77 | 5400 | 0.5065 | 0.7725 | 0.7758 |
| 0.4817 | 25.69 | 5600 | 0.5004 | 0.7760 | 0.7784 |
| 0.4796 | 26.61 | 5800 | 0.4973 | 0.7755 | 0.7778 |
| 0.4758 | 27.52 | 6000 | 0.5100 | 0.7729 | 0.7764 |
| 0.4787 | 28.44 | 6200 | 0.5018 | 0.7717 | 0.7747 |
| 0.4762 | 29.36 | 6400 | 0.5042 | 0.7713 | 0.7747 |
| 0.4794 | 30.28 | 6600 | 0.5040 | 0.7725 | 0.7758 |
| 0.4762 | 31.19 | 6800 | 0.4930 | 0.7812 | 0.7827 |
| 0.476 | 32.11 | 7000 | 0.4992 | 0.7733 | 0.7764 |
| 0.4767 | 33.03 | 7200 | 0.5005 | 0.7742 | 0.7769 |
| 0.4753 | 33.94 | 7400 | 0.5002 | 0.7756 | 0.7781 |
| 0.4756 | 34.86 | 7600 | 0.4983 | 0.7750 | 0.7778 |
| 0.4743 | 35.78 | 7800 | 0.4978 | 0.7738 | 0.7767 |
| 0.476 | 36.7 | 8000 | 0.4983 | 0.7744 | 0.7772 |
| 0.4736 | 37.61 | 8200 | 0.5032 | 0.7712 | 0.7747 |
| 0.4758 | 38.53 | 8400 | 0.4928 | 0.7799 | 0.7818 |
| 0.4734 | 39.45 | 8600 | 0.4986 | 0.7745 | 0.7772 |
| 0.4725 | 40.37 | 8800 | 0.5023 | 0.7729 | 0.7761 |
| 0.4773 | 41.28 | 9000 | 0.4986 | 0.7734 | 0.7764 |
| 0.4743 | 42.2 | 9200 | 0.4955 | 0.7774 | 0.7798 |
| 0.4721 | 43.12 | 9400 | 0.4984 | 0.7755 | 0.7781 |
| 0.4744 | 44.04 | 9600 | 0.4979 | 0.7750 | 0.7778 |
| 0.4732 | 44.95 | 9800 | 0.5005 | 0.7721 | 0.7752 |
| 0.4742 | 45.87 | 10000 | 0.4987 | 0.7755 | 0.7784 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:29:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_65536\_512\_47M-L1\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5000
* F1 Score: 0.7751
* Accuracy: 0.7769
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # dareties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with ./models2/Meta-Llama-3-8B-instruct as the base.
### Models Merged
The following models were included in the merge:
* ./models2/Llama-3-Kafka-8B-v0.1
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental
* ./models2/Llama-3-SauerkrautLM-8b-Instruct
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./models2/Meta-Llama-3-8B-instruct
# No parameters necessary for base model
- model: ./models2/Llama-3-SauerkrautLM-8b-Instruct
parameters:
density: 0.6
weight: 0.25
- model: ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental
parameters:
density: 0.6
weight: 0.25
- model: ./models2/Llama-3-Kafka-8B-v0.1
parameters:
density: 0.6
weight: 0.25
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
density: 0.6
weight: 0.25
merge_method: dare_ties
base_model: ./models2/Meta-Llama-3-8B-instruct
tokenizer_source: model:NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
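As a quick sanity check of the published merge, the model can be loaded like any other causal LM with 🤗 transformers. The repo id below comes from this card's metadata; the prompt and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "johannhartmann/llama8_dt_b"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # follows Hermes-2-Pro per `tokenizer_source`
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

# German prompt, since the merged models target German; any prompt works.
inputs = tokenizer("Die Hauptstadt von Deutschland ist", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```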
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Llama-3-8B"]} | johannhartmann/llama8_dt_b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:30:05+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-NousResearch/Hermes-2-Pro-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # dareties
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method, with ./models2/Meta-Llama-3-8B-instruct as the base.
### Models Merged
The following models were included in the merge:
* ./models2/Llama-3-Kafka-8B-v0.1
* NousResearch/Hermes-2-Pro-Llama-3-8B
* ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental
* ./models2/Llama-3-SauerkrautLM-8b-Instruct
### Configuration
The following YAML configuration was used to produce this model:
| [
"# dareties\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using ./models2/Meta-Llama-3-8B-instruct as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ./models2/Llama-3-Kafka-8B-v0.1\n* NousResearch/Hermes-2-Pro-Llama-3-8B\n* ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental\n* ./models2/Llama-3-SauerkrautLM-8b-Instruct",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-NousResearch/Hermes-2-Pro-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# dareties\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using ./models2/Meta-Llama-3-8B-instruct as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* ./models2/Llama-3-Kafka-8B-v0.1\n* NousResearch/Hermes-2-Pro-Llama-3-8B\n* ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental\n* ./models2/Llama-3-SauerkrautLM-8b-Instruct",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Uploaded model
- **Developed by:** johannoriel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
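A minimal inference sketch with Unsloth is shown below. The sequence length and 4-bit loading are illustrative assumptions rather than values documented in this card, and if the repository only holds LoRA adapters rather than a full checkpoint, loading through `peft` (for example `AutoPeftModelForCausalLM`) would be needed instead.

```python
from unsloth import FastLanguageModel

# Assumptions: the repo can be loaded directly by Unsloth (it was trained with it);
# max_seq_length and load_in_4bit are illustrative choices, not values from this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="johannoriel/medllama_model",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Explain what a hemoglobin A1c test measures.", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```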
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | johannoriel/medllama_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:30:46+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: johannoriel
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: johannoriel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: johannoriel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null | This model is finetuned with an ML Q&A dataset - hanyueshf/ml-arxiv-papers-qa. It outperforms both its base Llama-2-7B-Chat and Llama-3-8B-Instruct, as shown in below figure.
Note: improvement = (finetuned_llama2_scores - base_llama2_scores) / base_llama2_scores.
The finetuning code is available on Github at https://github.com/hanyuesgithub/QA-ml-arxiv-papers.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ab7449356bf23b4ac0f556/vx-VNfMq3q4iNRqmFz1iw.png)
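In code, the improvement metric defined in the note above is a simple relative difference; the scores used here are placeholders, not the actual benchmark numbers behind the figure.

```python
def relative_improvement(finetuned_score: float, base_score: float) -> float:
    """improvement = (finetuned - base) / base, as defined in the note above."""
    return (finetuned_score - base_score) / base_score

# Placeholder values for illustration only:
print(f"{relative_improvement(0.75, 0.60):.1%}")  # -> 25.0%
```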
| {} | hanyueshf/llama-2-7b-chat-ml-qa | null | [
"region:us"
] | null | 2024-05-03T16:31:05+00:00 | [] | [] | TAGS
#region-us
| This model is fine-tuned on an ML Q&A dataset, hanyueshf/ml-arxiv-papers-qa. It outperforms both its base model, Llama-2-7B-Chat, and Llama-3-8B-Instruct, as shown in the figure below.
Note: improvement = (finetuned_llama2_scores - base_llama2_scores) / base_llama2_scores.
The fine-tuning code is available on GitHub at URL
!image/png
| [] | [
"TAGS\n#region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_boolq_bert
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5736
- Accuracy: 0.7222
- F1: 0.7325
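For reference, a minimal inference sketch with the 🤗 pipeline API is shown below. The repo id comes from this card's metadata, and pairing the question with its passage as a sequence pair is an assumption, since the exact input format used for fine-tuning is not documented here.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="lenatr99/fine_tuned_boolq_bert")

question = "Is the sky blue on a clear day?"
passage = "On a clear day the sky appears blue because air molecules scatter blue light more than red."
# Assumption: question and passage were paired into a single sequence pair during fine-tuning.
print(clf({"text": question, "text_pair": passage}))
```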
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.6443 | 4.1667 | 50 | 0.5606 | 0.7778 | 0.6806 |
| 0.3932 | 8.3333 | 100 | 0.6016 | 0.6111 | 0.6255 |
| 0.126 | 12.5 | 150 | 1.0887 | 0.5 | 0.5418 |
| 0.0166 | 16.6667 | 200 | 1.5543 | 0.5556 | 0.5829 |
| 0.0041 | 20.8333 | 250 | 1.5032 | 0.7222 | 0.7325 |
| 0.0022 | 25.0 | 300 | 1.7354 | 0.6667 | 0.6872 |
| 0.0018 | 29.1667 | 350 | 1.5756 | 0.6667 | 0.6667 |
| 0.0016 | 33.3333 | 400 | 1.5736 | 0.7222 | 0.7325 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "fine_tuned_boolq_bert", "results": []}]} | lenatr99/fine_tuned_boolq_bert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:31:44+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| fine\_tuned\_boolq\_bert
========================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5736
* Accuracy: 0.7222
* F1: 0.7325
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # the course notebooks typically use `import gymnasium as gym`; adjust to your setup
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course utilities, not a library import.
model = load_from_hub(repo_id="TeoGal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
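Once the model dict is loaded, the greedy policy can be rolled out as below. The `"qtable"` key and the Gymnasium-style step API are assumptions based on the usual course format, so adjust if your pickle or gym version differs.

```python
import numpy as np

# Assumption: the downloaded pickle stores the learned Q-table under the "qtable" key.
qtable = np.array(model["qtable"])

state, _ = env.reset()  # Gymnasium-style API; classic gym returns only the state
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))                      # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)  # 5-tuple in Gymnasium
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```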
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | TeoGal/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-03T16:32:44+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/mdxmtky | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:32:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# LLOROv2 - GGUF VERSION
This repository contains the 7B-parameter Llorov2 model in GGUF format, in a 16-bit version and also in an 8-bit quantized version.
Lloro, developed by Semantix Research Labs, is a language model trained to effectively perform Data Analysis in Portuguese with Python. It is an improved version of codellama/CodeLlama-7b-Instruct-hf that was trained on synthetic datasets. The fine-tuning process was carried out using the QLORA methodology on a V100 GPU with 16 GB of RAM.
Visit the [website](https://semantix.ai/conheca-o-lloro-o-primeiro-modelo-de-ia-expert-em-analise-de-dados-100-brasileiro/) for more information about Lloro.
# About the GGUF format
The GGUF version of the model can be used for inference with llama.cpp, on either CPU or GPU, as well as with other compatible libraries and tools, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LM Studio](https://lmstudio.ai/)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [ctransformers](https://github.com/marella/ctransformers)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
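For example, a minimal llama-cpp-python sketch might look like the following. The GGUF filename, context size, and GPU offload setting are illustrative assumptions; check the repository files for the exact quantization you want (f16 or q8_0).

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llorov2-q8_0.gguf",  # hypothetical local filename; download the file you want first
    n_ctx=4096,                      # illustrative context size
    n_gpu_layers=-1,                 # offload all layers to GPU if one is available
)

# Portuguese data-analysis prompt, matching the model's intended use.
out = llm("Escreva um código em Python que leia um CSV com pandas e mostre a média de uma coluna.",
          max_tokens=256)
print(out["choices"][0]["text"])
```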
## Model Details
- **Base Model:** CodeLlama-7b-Instruct-hf
- **Training Dataset:** Synthetic
- **Language:** Portuguese
## Contributions
Contributions to improve this model are welcome. Feel free to open issues and pull requests. | {"language": ["pt"], "license": "llama2", "library_name": "transformers", "tags": ["LLM", "Portuguese", "Lloro", "Llama 2", "Q&A"], "datasets": ["semantixai/Test-Dataset-Lloro"], "base_model": "codellama/CodeLlama-7b-Instruct-hf"} | anaxsouza/llorov2-gguf | null | [
"transformers",
"gguf",
"LLM",
"Portuguese",
"Lloro",
"Llama 2",
"Q&A",
"pt",
"dataset:semantixai/Test-Dataset-Lloro",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:33:25+00:00 | [] | [
"pt"
] | TAGS
#transformers #gguf #LLM #Portuguese #Lloro #Llama 2 #Q&A #pt #dataset-semantixai/Test-Dataset-Lloro #base_model-codellama/CodeLlama-7b-Instruct-hf #license-llama2 #endpoints_compatible #region-us
|
# LLOROv2 - GGUF VERSION
This repository contains the 7B-parameter Llorov2 model in GGUF format, in a 16-bit version and also in an 8-bit quantized version.
Lloro, developed by Semantix Research Labs, is a language model trained to effectively perform Data Analysis in Portuguese with Python. It is an improved version of codellama/CodeLlama-7b-Instruct-hf that was trained on synthetic datasets. The fine-tuning process was carried out using the QLORA methodology on a V100 GPU with 16 GB of RAM.
Visit the website for more information about Lloro.
# About the GGUF format
The GGUF version of the model can be used for inference with URL, on either CPU or GPU, as well as with other compatible libraries and tools, such as:
* text-generation-webui
* KoboldCpp
* LM Studio
* LoLLMS Web UI
* ctransformers
* llama-cpp-python
## Model Details
- Base Model: CodeLlama-7b-Instruct-hf
- Training Dataset: Synthetic
- Language: Portuguese
## Contributions
Contributions to improve this model are welcome. Feel free to open issues and pull requests. | [
"# LLOROv2 - GGUF VERSION\n\nEste repositório contém o modelo Llorov2 de 7B de parâmetros em formato GGUF, na versão 16 bits e também nas versão quantizada de 8 bits.\n\nLloro, desenvolvido pelos Laboratórios de Pesquisa Semantix, é um Modelo de Linguagem que foi treinado para realizar efetivamente Análise de Dados em Português no Python. É uma versão aprimorada de codellama/CodeLlama-7b-Instruct-hf, que foi treinado em conjuntos de dados sintéticos. O processo de aprimoramento foi realizado usando a metodologia QLORA em uma GPU V100 com 16 GB de RAM.\n\nAcesse o site para mais informações sobre o Lloro.",
"# Sobre o formato GGUF\n\nO modelo no formato GGUF permite seu uso para inferência usando o URL, permitindo tanto o uso de CPU como de GPU, e outras bibliotecas e ferramentas compatíveis, como:\n* text-generation-webui\n* KoboldCpp\n* LM Studio\n* LoLLMS Web UI\n* ctransformers\n* llama-cpp-python",
"## Detalhes do Modelo\n\n- Modelo Base: CodeLlama-7b-Instruct-hf\n- Dataset de Treinamento: Synthetic\n- Idioma: Português",
"## Contribuições\n\nContribuições para a melhoria deste modelo são bem-vindas. Sinta-se à vontade para abrir problemas e solicitações pull."
] | [
"TAGS\n#transformers #gguf #LLM #Portuguese #Lloro #Llama 2 #Q&A #pt #dataset-semantixai/Test-Dataset-Lloro #base_model-codellama/CodeLlama-7b-Instruct-hf #license-llama2 #endpoints_compatible #region-us \n",
"# LLOROv2 - GGUF VERSION\n\nEste repositório contém o modelo Llorov2 de 7B de parâmetros em formato GGUF, na versão 16 bits e também nas versão quantizada de 8 bits.\n\nLloro, desenvolvido pelos Laboratórios de Pesquisa Semantix, é um Modelo de Linguagem que foi treinado para realizar efetivamente Análise de Dados em Português no Python. É uma versão aprimorada de codellama/CodeLlama-7b-Instruct-hf, que foi treinado em conjuntos de dados sintéticos. O processo de aprimoramento foi realizado usando a metodologia QLORA em uma GPU V100 com 16 GB de RAM.\n\nAcesse o site para mais informações sobre o Lloro.",
"# Sobre o formato GGUF\n\nO modelo no formato GGUF permite seu uso para inferência usando o URL, permitindo tanto o uso de CPU como de GPU, e outras bibliotecas e ferramentas compatíveis, como:\n* text-generation-webui\n* KoboldCpp\n* LM Studio\n* LoLLMS Web UI\n* ctransformers\n* llama-cpp-python",
"## Detalhes do Modelo\n\n- Modelo Base: CodeLlama-7b-Instruct-hf\n- Dataset de Treinamento: Synthetic\n- Idioma: Português",
"## Contribuições\n\nContribuições para a melhoria deste modelo são bem-vindas. Sinta-se à vontade para abrir problemas e solicitações pull."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-fsi
This model is a fine-tuned version of [dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978](https://huggingface.co/dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978) on the dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
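Since the training data is ChatML-formatted, inference would normally go through the tokenizer's chat template, roughly as sketched below. The repo id is taken from this card, and the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jamesoneill12/sft-fsi"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What should I consider before opening a retirement account?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```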
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 5.8867 | 0.5333 | 1 | 5.7356 |
| 5.8867 | 1.6 | 3 | 2.7050 |
| 3.7066 | 2.6667 | 5 | 1.9496 |
| 3.7066 | 3.7333 | 7 | 1.5918 |
| 3.7066 | 4.8 | 9 | 1.3194 |
| 1.6243 | 5.8667 | 11 | 1.0055 |
| 1.6243 | 6.9333 | 13 | 0.8459 |
| 0.9667 | 8.0 | 15 | 0.7559 |
| 0.9667 | 8.5333 | 16 | 0.7331 |
| 0.9667 | 9.6 | 18 | 0.7034 |
| 0.7508 | 10.6667 | 20 | 0.7006 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml"], "base_model": "dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978", "model-index": [{"name": "sft-fsi", "results": []}]} | jamesoneill12/sft-fsi | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml",
"base_model:dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:34:49+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml #base_model-dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| sft-fsi
=======
This model is a fine-tuned version of dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978 on the dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7006
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 256
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml #base_model-dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4908
- F1 Score: 0.7777
- Accuracy: 0.7790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5667 | 0.92 | 200 | 0.5382 | 0.7367 | 0.7397 |
| 0.5207 | 1.83 | 400 | 0.5229 | 0.7512 | 0.7537 |
| 0.5071 | 2.75 | 600 | 0.5121 | 0.7597 | 0.7618 |
| 0.505 | 3.67 | 800 | 0.5060 | 0.7706 | 0.7724 |
| 0.4938 | 4.59 | 1000 | 0.5047 | 0.7715 | 0.7735 |
| 0.4908 | 5.5 | 1200 | 0.5033 | 0.7695 | 0.7724 |
| 0.4881 | 6.42 | 1400 | 0.4938 | 0.7790 | 0.7801 |
| 0.4832 | 7.34 | 1600 | 0.5067 | 0.7749 | 0.7775 |
| 0.4775 | 8.26 | 1800 | 0.4963 | 0.7822 | 0.7838 |
| 0.4815 | 9.17 | 2000 | 0.4922 | 0.7798 | 0.7815 |
| 0.478 | 10.09 | 2200 | 0.5153 | 0.7620 | 0.7675 |
| 0.4733 | 11.01 | 2400 | 0.4963 | 0.7765 | 0.7801 |
| 0.4707 | 11.93 | 2600 | 0.4824 | 0.7864 | 0.7873 |
| 0.4677 | 12.84 | 2800 | 0.4894 | 0.7764 | 0.7792 |
| 0.4668 | 13.76 | 3000 | 0.5010 | 0.7719 | 0.7761 |
| 0.4613 | 14.68 | 3200 | 0.4897 | 0.7811 | 0.7833 |
| 0.4644 | 15.6 | 3400 | 0.4857 | 0.7769 | 0.7795 |
| 0.463 | 16.51 | 3600 | 0.4989 | 0.7776 | 0.7807 |
| 0.4594 | 17.43 | 3800 | 0.4825 | 0.7863 | 0.7878 |
| 0.4594 | 18.35 | 4000 | 0.4870 | 0.7809 | 0.7833 |
| 0.4559 | 19.27 | 4200 | 0.4896 | 0.7829 | 0.7850 |
| 0.4579 | 20.18 | 4400 | 0.4996 | 0.7734 | 0.7772 |
| 0.4552 | 21.1 | 4600 | 0.4861 | 0.7824 | 0.7847 |
| 0.4564 | 22.02 | 4800 | 0.4899 | 0.7840 | 0.7861 |
| 0.4525 | 22.94 | 5000 | 0.4892 | 0.7759 | 0.7792 |
| 0.4504 | 23.85 | 5200 | 0.4890 | 0.7818 | 0.7847 |
| 0.4467 | 24.77 | 5400 | 0.5002 | 0.7733 | 0.7775 |
| 0.4512 | 25.69 | 5600 | 0.4926 | 0.7807 | 0.7835 |
| 0.4492 | 26.61 | 5800 | 0.4851 | 0.7833 | 0.7856 |
| 0.4436 | 27.52 | 6000 | 0.5050 | 0.7786 | 0.7821 |
| 0.4465 | 28.44 | 6200 | 0.4897 | 0.7824 | 0.7853 |
| 0.4451 | 29.36 | 6400 | 0.4890 | 0.7758 | 0.7792 |
| 0.4446 | 30.28 | 6600 | 0.4969 | 0.7771 | 0.7810 |
| 0.4429 | 31.19 | 6800 | 0.4843 | 0.7854 | 0.7876 |
| 0.441 | 32.11 | 7000 | 0.4919 | 0.7806 | 0.7838 |
| 0.4424 | 33.03 | 7200 | 0.4934 | 0.7819 | 0.7850 |
| 0.4413 | 33.94 | 7400 | 0.4864 | 0.7825 | 0.7850 |
| 0.4409 | 34.86 | 7600 | 0.4901 | 0.7825 | 0.7853 |
| 0.4398 | 35.78 | 7800 | 0.4866 | 0.7823 | 0.7847 |
| 0.4412 | 36.7 | 8000 | 0.4897 | 0.7805 | 0.7835 |
| 0.4369 | 37.61 | 8200 | 0.4985 | 0.7776 | 0.7815 |
| 0.4408 | 38.53 | 8400 | 0.4874 | 0.7825 | 0.7853 |
| 0.4359 | 39.45 | 8600 | 0.4935 | 0.7800 | 0.7833 |
| 0.4366 | 40.37 | 8800 | 0.4989 | 0.7804 | 0.7838 |
| 0.4396 | 41.28 | 9000 | 0.4934 | 0.7810 | 0.7844 |
| 0.4359 | 42.2 | 9200 | 0.4899 | 0.7829 | 0.7858 |
| 0.4332 | 43.12 | 9400 | 0.4930 | 0.7831 | 0.7861 |
| 0.4371 | 44.04 | 9600 | 0.4909 | 0.7831 | 0.7861 |
| 0.4348 | 44.95 | 9800 | 0.4961 | 0.7813 | 0.7847 |
| 0.4362 | 45.87 | 10000 | 0.4933 | 0.7830 | 0.7861 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:35:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_65536\_512\_47M-L8\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4908
* F1 Score: 0.7777
* Accuracy: 0.7790
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
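
Because the library is PEFT, this repository holds adapter weights rather than a full model, so they are loaded on top of the base checkpoint. The sketch below is hedged: the choice of `AutoModelForSequenceClassification` with `num_labels=2` is an assumption about the classification head, and `trust_remote_code=True` may be needed if the base model ships custom modeling code; the card itself states neither.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the adapter weights
```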
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5918
- F1 Score: 0.6924
- Accuracy: 0.6926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6506 | 3.92 | 200 | 0.6065 | 0.6652 | 0.6691 |
| 0.6137 | 7.84 | 400 | 0.5919 | 0.6697 | 0.6716 |
| 0.6024 | 11.76 | 600 | 0.5871 | 0.6856 | 0.6864 |
| 0.5927 | 15.69 | 800 | 0.5826 | 0.6963 | 0.6963 |
| 0.5819 | 19.61 | 1000 | 0.5840 | 0.6903 | 0.6914 |
| 0.5778 | 23.53 | 1200 | 0.5791 | 0.7016 | 0.7025 |
| 0.5674 | 27.45 | 1400 | 0.5823 | 0.6957 | 0.6963 |
| 0.5655        | 31.37  | 1600  | 0.5750          | 0.7000   | 0.7000   |
| 0.5593 | 35.29 | 1800 | 0.5737 | 0.7037 | 0.7037 |
| 0.5532 | 39.22 | 2000 | 0.5797 | 0.6943 | 0.6951 |
| 0.5519 | 43.14 | 2200 | 0.5756 | 0.6985 | 0.6988 |
| 0.5508 | 47.06 | 2400 | 0.5696 | 0.7145 | 0.7148 |
| 0.5412 | 50.98 | 2600 | 0.5818 | 0.6816 | 0.6827 |
| 0.5392 | 54.9 | 2800 | 0.5716 | 0.7124 | 0.7136 |
| 0.5385 | 58.82 | 3000 | 0.5700 | 0.7047 | 0.7049 |
| 0.5367 | 62.75 | 3200 | 0.5681 | 0.7013 | 0.7012 |
| 0.5329 | 66.67 | 3400 | 0.5713 | 0.6976 | 0.6975 |
| 0.5304 | 70.59 | 3600 | 0.5742 | 0.7010 | 0.7012 |
| 0.5282 | 74.51 | 3800 | 0.5724 | 0.6975 | 0.6975 |
| 0.5279 | 78.43 | 4000 | 0.5690 | 0.6988 | 0.6988 |
| 0.5261 | 82.35 | 4200 | 0.5696 | 0.6998 | 0.7 |
| 0.5231 | 86.27 | 4400 | 0.5752 | 0.6985 | 0.6988 |
| 0.5214 | 90.2 | 4600 | 0.5722 | 0.7013 | 0.7012 |
| 0.5186 | 94.12 | 4800 | 0.5787 | 0.6983 | 0.6988 |
| 0.5183 | 98.04 | 5000 | 0.5680 | 0.7085 | 0.7086 |
| 0.5154 | 101.96 | 5200 | 0.5731 | 0.7036 | 0.7037 |
| 0.514 | 105.88 | 5400 | 0.5663 | 0.7121 | 0.7123 |
| 0.5163 | 109.8 | 5600 | 0.5668 | 0.7148 | 0.7148 |
| 0.5144 | 113.73 | 5800 | 0.5673 | 0.7184 | 0.7185 |
| 0.5153 | 117.65 | 6000 | 0.5702 | 0.7037 | 0.7037 |
| 0.5117 | 121.57 | 6200 | 0.5666 | 0.7161 | 0.7160 |
| 0.5099 | 125.49 | 6400 | 0.5759 | 0.7047 | 0.7049 |
| 0.5131 | 129.41 | 6600 | 0.5691 | 0.7112 | 0.7111 |
| 0.5109 | 133.33 | 6800 | 0.5681 | 0.7124 | 0.7123 |
| 0.5097 | 137.25 | 7000 | 0.5711 | 0.7086 | 0.7086 |
| 0.5056 | 141.18 | 7200 | 0.5727 | 0.7112 | 0.7111 |
| 0.5074 | 145.1 | 7400 | 0.5751 | 0.7060 | 0.7062 |
| 0.5065 | 149.02 | 7600 | 0.5696 | 0.7136 | 0.7136 |
| 0.5063 | 152.94 | 7800 | 0.5720 | 0.7099 | 0.7099 |
| 0.5041 | 156.86 | 8000 | 0.5691 | 0.7112 | 0.7111 |
| 0.5059 | 160.78 | 8200 | 0.5727 | 0.7099 | 0.7099 |
| 0.5064 | 164.71 | 8400 | 0.5724 | 0.7111 | 0.7111 |
| 0.5049 | 168.63 | 8600 | 0.5717 | 0.7099 | 0.7099 |
| 0.5042 | 172.55 | 8800 | 0.5697 | 0.7149 | 0.7148 |
| 0.5008 | 176.47 | 9000 | 0.5716 | 0.7149 | 0.7148 |
| 0.4982 | 180.39 | 9200 | 0.5729 | 0.7136 | 0.7136 |
| 0.4989 | 184.31 | 9400 | 0.5742 | 0.7099 | 0.7099 |
| 0.4989 | 188.24 | 9600 | 0.5733 | 0.7111 | 0.7111 |
| 0.504 | 192.16 | 9800 | 0.5722 | 0.7124 | 0.7123 |
| 0.4969 | 196.08 | 10000 | 0.5727 | 0.7111 | 0.7111 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:35:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_0-seqsight\_65536\_512\_47M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5918
* F1 Score: 0.6924
* Accuracy: 0.6926
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
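
For reference, the two reported metrics are standard binary-classification scores; a minimal illustration with scikit-learn follows. The labels and predictions below are dummy placeholders, and whether the card's F1 is binary, macro, or weighted averaged is not stated, so macro is shown as one common choice.

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0]  # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1]  # placeholder model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
```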
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5059
- F1 Score: 0.7826
- Accuracy: 0.7850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5563 | 0.92 | 200 | 0.5396 | 0.7412 | 0.7451 |
| 0.5106 | 1.83 | 400 | 0.5126 | 0.7627 | 0.7655 |
| 0.4971 | 2.75 | 600 | 0.5045 | 0.7687 | 0.7709 |
| 0.4962 | 3.67 | 800 | 0.4940 | 0.7739 | 0.7752 |
| 0.4826 | 4.59 | 1000 | 0.4927 | 0.7786 | 0.7801 |
| 0.4764 | 5.5 | 1200 | 0.4988 | 0.7700 | 0.7738 |
| 0.4737 | 6.42 | 1400 | 0.4823 | 0.7788 | 0.7798 |
| 0.4665 | 7.34 | 1600 | 0.4999 | 0.7720 | 0.7752 |
| 0.4582 | 8.26 | 1800 | 0.4894 | 0.7817 | 0.7830 |
| 0.4631 | 9.17 | 2000 | 0.4824 | 0.7819 | 0.7833 |
| 0.4574 | 10.09 | 2200 | 0.5118 | 0.7611 | 0.7663 |
| 0.4517 | 11.01 | 2400 | 0.4879 | 0.7811 | 0.7841 |
| 0.4455 | 11.93 | 2600 | 0.4735 | 0.7886 | 0.7893 |
| 0.4428 | 12.84 | 2800 | 0.4840 | 0.7766 | 0.7798 |
| 0.4382 | 13.76 | 3000 | 0.5024 | 0.7712 | 0.7761 |
| 0.4323 | 14.68 | 3200 | 0.4920 | 0.7870 | 0.7893 |
| 0.4336 | 15.6 | 3400 | 0.4818 | 0.7847 | 0.7876 |
| 0.4307 | 16.51 | 3600 | 0.4985 | 0.7775 | 0.7807 |
| 0.4247 | 17.43 | 3800 | 0.4830 | 0.7893 | 0.7910 |
| 0.4217 | 18.35 | 4000 | 0.4860 | 0.7895 | 0.7913 |
| 0.4195 | 19.27 | 4200 | 0.5007 | 0.7898 | 0.7919 |
| 0.4169 | 20.18 | 4400 | 0.5000 | 0.7856 | 0.7884 |
| 0.4126 | 21.1 | 4600 | 0.4923 | 0.7923 | 0.7939 |
| 0.4123 | 22.02 | 4800 | 0.4984 | 0.7824 | 0.7850 |
| 0.4059 | 22.94 | 5000 | 0.4944 | 0.7830 | 0.7861 |
| 0.4024 | 23.85 | 5200 | 0.4952 | 0.7799 | 0.7830 |
| 0.399 | 24.77 | 5400 | 0.5144 | 0.7755 | 0.7798 |
| 0.3994 | 25.69 | 5600 | 0.5067 | 0.7887 | 0.7913 |
| 0.3975 | 26.61 | 5800 | 0.4957 | 0.7832 | 0.7858 |
| 0.3908 | 27.52 | 6000 | 0.5261 | 0.7785 | 0.7818 |
| 0.392 | 28.44 | 6200 | 0.4996 | 0.7841 | 0.7867 |
| 0.3887 | 29.36 | 6400 | 0.5025 | 0.7812 | 0.7844 |
| 0.3849 | 30.28 | 6600 | 0.5128 | 0.7742 | 0.7781 |
| 0.3824 | 31.19 | 6800 | 0.5134 | 0.7831 | 0.7858 |
| 0.3811 | 32.11 | 7000 | 0.5071 | 0.7820 | 0.7847 |
| 0.3815 | 33.03 | 7200 | 0.5100 | 0.7833 | 0.7858 |
| 0.3787 | 33.94 | 7400 | 0.5030 | 0.7860 | 0.7881 |
| 0.3754 | 34.86 | 7600 | 0.5084 | 0.7831 | 0.7858 |
| 0.373 | 35.78 | 7800 | 0.5119 | 0.7851 | 0.7876 |
| 0.3724 | 36.7 | 8000 | 0.5201 | 0.7838 | 0.7870 |
| 0.3728 | 37.61 | 8200 | 0.5250 | 0.7716 | 0.7758 |
| 0.3755 | 38.53 | 8400 | 0.5147 | 0.7827 | 0.7853 |
| 0.3679 | 39.45 | 8600 | 0.5214 | 0.7767 | 0.7804 |
| 0.3668 | 40.37 | 8800 | 0.5325 | 0.7759 | 0.7795 |
| 0.3704 | 41.28 | 9000 | 0.5190 | 0.7773 | 0.7807 |
| 0.3628 | 42.2 | 9200 | 0.5186 | 0.7821 | 0.7850 |
| 0.3624 | 43.12 | 9400 | 0.5233 | 0.7824 | 0.7853 |
| 0.3623 | 44.04 | 9600 | 0.5235 | 0.7788 | 0.7821 |
| 0.3628 | 44.95 | 9800 | 0.5291 | 0.7771 | 0.7807 |
| 0.3665 | 45.87 | 10000 | 0.5250 | 0.7809 | 0.7841 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:35:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_65536\_512\_47M-L32\_f
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5059
* F1 Score: 0.7826
* Accuracy: 0.7850
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
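
The optimizer/scheduler pair listed above can be reproduced in a few lines. This is a sketch under the assumption of zero warmup steps (the card lists none) and uses placeholder parameters rather than the real model.

```python
import torch
from transformers import get_scheduler

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters, not the real model
optimizer = torch.optim.Adam(params, lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,          # assumption: the card lists no warmup
    num_training_steps=10_000,   # training_steps from the card
)
```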
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1181
- F1 Score: 0.9561
- Accuracy: 0.9561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2897 | 0.6 | 200 | 0.1532 | 0.9391 | 0.9391 |
| 0.1588 | 1.2 | 400 | 0.1377 | 0.9472 | 0.9472 |
| 0.1482 | 1.81 | 600 | 0.1222 | 0.9521 | 0.9521 |
| 0.1344 | 2.41 | 800 | 0.1194 | 0.9534 | 0.9535 |
| 0.1341 | 3.01 | 1000 | 0.1171 | 0.9550 | 0.9550 |
| 0.1298 | 3.61 | 1200 | 0.1143 | 0.9559 | 0.9559 |
| 0.1276 | 4.22 | 1400 | 0.1130 | 0.9550 | 0.9550 |
| 0.1226 | 4.82 | 1600 | 0.1121 | 0.9572 | 0.9572 |
| 0.1228 | 5.42 | 1800 | 0.1124 | 0.9565 | 0.9565 |
| 0.1177 | 6.02 | 2000 | 0.1118 | 0.9540 | 0.9540 |
| 0.1176 | 6.63 | 2200 | 0.1085 | 0.9582 | 0.9582 |
| 0.1165 | 7.23 | 2400 | 0.1094 | 0.9585 | 0.9585 |
| 0.1131 | 7.83 | 2600 | 0.1084 | 0.9582 | 0.9582 |
| 0.1148 | 8.43 | 2800 | 0.1112 | 0.9576 | 0.9576 |
| 0.1169 | 9.04 | 3000 | 0.1116 | 0.9580 | 0.9580 |
| 0.1117 | 9.64 | 3200 | 0.1169 | 0.9550 | 0.9550 |
| 0.1118 | 10.24 | 3400 | 0.1079 | 0.9593 | 0.9593 |
| 0.1096 | 10.84 | 3600 | 0.1060 | 0.9582 | 0.9582 |
| 0.1065 | 11.45 | 3800 | 0.1081 | 0.9597 | 0.9597 |
| 0.1082 | 12.05 | 4000 | 0.1082 | 0.9585 | 0.9585 |
| 0.1083 | 12.65 | 4200 | 0.1047 | 0.9606 | 0.9606 |
| 0.1077 | 13.25 | 4400 | 0.1059 | 0.9600 | 0.9601 |
| 0.1055 | 13.86 | 4600 | 0.1057 | 0.9597 | 0.9597 |
| 0.1076 | 14.46 | 4800 | 0.1057 | 0.9587 | 0.9587 |
| 0.1028 | 15.06 | 5000 | 0.1041 | 0.9585 | 0.9585 |
| 0.1048 | 15.66 | 5200 | 0.1030 | 0.9597 | 0.9597 |
| 0.1058 | 16.27 | 5400 | 0.1035 | 0.9612 | 0.9612 |
| 0.1016 | 16.87 | 5600 | 0.1028 | 0.9608 | 0.9608 |
| 0.1014 | 17.47 | 5800 | 0.1027 | 0.9604 | 0.9604 |
| 0.1062 | 18.07 | 6000 | 0.1010 | 0.9616 | 0.9616 |
| 0.1016 | 18.67 | 6200 | 0.1018 | 0.9595 | 0.9595 |
| 0.1031 | 19.28 | 6400 | 0.1016 | 0.9606 | 0.9606 |
| 0.097 | 19.88 | 6600 | 0.1047 | 0.9615 | 0.9616 |
| 0.1034 | 20.48 | 6800 | 0.1034 | 0.9608 | 0.9608 |
| 0.0985 | 21.08 | 7000 | 0.1016 | 0.9614 | 0.9614 |
| 0.096 | 21.69 | 7200 | 0.1030 | 0.9612 | 0.9612 |
| 0.098 | 22.29 | 7400 | 0.1037 | 0.9601 | 0.9601 |
| 0.0999 | 22.89 | 7600 | 0.1004 | 0.9616 | 0.9616 |
| 0.097 | 23.49 | 7800 | 0.1019 | 0.9616 | 0.9616 |
| 0.0988 | 24.1 | 8000 | 0.1010 | 0.9614 | 0.9614 |
| 0.0962 | 24.7 | 8200 | 0.1023 | 0.9608 | 0.9608 |
| 0.0973 | 25.3 | 8400 | 0.1015 | 0.9612 | 0.9612 |
| 0.0944 | 25.9 | 8600 | 0.1014 | 0.9614 | 0.9614 |
| 0.0974 | 26.51 | 8800 | 0.1006 | 0.9619 | 0.9619 |
| 0.0968 | 27.11 | 9000 | 0.1001 | 0.9616 | 0.9616 |
| 0.0971 | 27.71 | 9200 | 0.1002 | 0.9614 | 0.9614 |
| 0.0929 | 28.31 | 9400 | 0.1005 | 0.9612 | 0.9612 |
| 0.0997 | 28.92 | 9600 | 0.1000 | 0.9619 | 0.9619 |
| 0.0931 | 29.52 | 9800 | 0.1003 | 0.9617 | 0.9617 |
| 0.0973 | 30.12 | 10000 | 0.1002 | 0.9614 | 0.9614 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T16:35:47+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_15M-L8\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1181
* F1 Score: 0.9561
* Accuracy: 0.9561
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
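
The fine-tuning data is hosted as a Hub dataset, so it can be pulled directly with the `datasets` library. Split and column names in the sketch below are assumptions, since the card does not document them.

```python
from datasets import load_dataset

# Sketch only: split/column names are assumptions, not documented on the card.
ds = load_dataset("mahdibaghbanzadeh/GUE_prom_prom_300_notata")
print(ds)              # inspect available splits and columns
print(ds["train"][0])  # assumes a "train" split exists
```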
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1217
- F1 Score: 0.9582
- Accuracy: 0.9582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2508 | 0.6 | 200 | 0.1359 | 0.9487 | 0.9487 |
| 0.1422 | 1.2 | 400 | 0.1312 | 0.9479 | 0.9480 |
| 0.1383 | 1.81 | 600 | 0.1152 | 0.9548 | 0.9548 |
| 0.1266 | 2.41 | 800 | 0.1144 | 0.9572 | 0.9572 |
| 0.1273 | 3.01 | 1000 | 0.1097 | 0.9589 | 0.9589 |
| 0.1216 | 3.61 | 1200 | 0.1115 | 0.9591 | 0.9591 |
| 0.1195 | 4.22 | 1400 | 0.1070 | 0.9599 | 0.9599 |
| 0.1141 | 4.82 | 1600 | 0.1052 | 0.9597 | 0.9597 |
| 0.1124 | 5.42 | 1800 | 0.1060 | 0.9589 | 0.9589 |
| 0.1089 | 6.02 | 2000 | 0.1047 | 0.9585 | 0.9585 |
| 0.1082 | 6.63 | 2200 | 0.1061 | 0.9589 | 0.9589 |
| 0.1055 | 7.23 | 2400 | 0.1037 | 0.9591 | 0.9591 |
| 0.1022 | 7.83 | 2600 | 0.1004 | 0.9591 | 0.9591 |
| 0.1037 | 8.43 | 2800 | 0.1056 | 0.9617 | 0.9617 |
| 0.1061 | 9.04 | 3000 | 0.1053 | 0.9600 | 0.9601 |
| 0.0992 | 9.64 | 3200 | 0.1105 | 0.9584 | 0.9584 |
| 0.0982 | 10.24 | 3400 | 0.1002 | 0.9631 | 0.9631 |
| 0.0983 | 10.84 | 3600 | 0.0987 | 0.9608 | 0.9608 |
| 0.0929 | 11.45 | 3800 | 0.1052 | 0.9584 | 0.9584 |
| 0.0951 | 12.05 | 4000 | 0.1039 | 0.9608 | 0.9608 |
| 0.0946 | 12.65 | 4200 | 0.0979 | 0.9625 | 0.9625 |
| 0.0942 | 13.25 | 4400 | 0.1007 | 0.9625 | 0.9625 |
| 0.0916 | 13.86 | 4600 | 0.1005 | 0.9627 | 0.9627 |
| 0.0927 | 14.46 | 4800 | 0.1026 | 0.9612 | 0.9612 |
| 0.09 | 15.06 | 5000 | 0.1008 | 0.9623 | 0.9623 |
| 0.0889 | 15.66 | 5200 | 0.1002 | 0.9631 | 0.9631 |
| 0.0896 | 16.27 | 5400 | 0.1003 | 0.9631 | 0.9631 |
| 0.0856 | 16.87 | 5600 | 0.0988 | 0.9625 | 0.9625 |
| 0.0853 | 17.47 | 5800 | 0.1013 | 0.9606 | 0.9606 |
| 0.089 | 18.07 | 6000 | 0.0973 | 0.9631 | 0.9631 |
| 0.0842 | 18.67 | 6200 | 0.0977 | 0.9644 | 0.9644 |
| 0.0855 | 19.28 | 6400 | 0.1026 | 0.9627 | 0.9627 |
| 0.0804 | 19.88 | 6600 | 0.1034 | 0.9612 | 0.9612 |
| 0.0855 | 20.48 | 6800 | 0.1006 | 0.9640 | 0.9640 |
| 0.0813 | 21.08 | 7000 | 0.1014 | 0.9634 | 0.9634 |
| 0.0791 | 21.69 | 7200 | 0.1038 | 0.9606 | 0.9606 |
| 0.0798 | 22.29 | 7400 | 0.1026 | 0.9623 | 0.9623 |
| 0.0816 | 22.89 | 7600 | 0.0986 | 0.9629 | 0.9629 |
| 0.0788 | 23.49 | 7800 | 0.1031 | 0.9633 | 0.9633 |
| 0.0791 | 24.1 | 8000 | 0.1003 | 0.9636 | 0.9636 |
| 0.078 | 24.7 | 8200 | 0.1035 | 0.9632 | 0.9633 |
| 0.0779 | 25.3 | 8400 | 0.1004 | 0.9646 | 0.9646 |
| 0.0763 | 25.9 | 8600 | 0.1020 | 0.9631 | 0.9631 |
| 0.0785 | 26.51 | 8800 | 0.1004 | 0.9633 | 0.9633 |
| 0.0777 | 27.11 | 9000 | 0.0999 | 0.9631 | 0.9631 |
| 0.0782 | 27.71 | 9200 | 0.0999 | 0.9638 | 0.9638 |
| 0.0731 | 28.31 | 9400 | 0.1000 | 0.9634 | 0.9634 |
| 0.0791 | 28.92 | 9600 | 0.1005 | 0.9633 | 0.9633 |
| 0.0733 | 29.52 | 9800 | 0.1013 | 0.9631 | 0.9631 |
| 0.0759 | 30.12 | 10000 | 0.1009 | 0.9631 | 0.9631 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T16:36:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_15M-L32\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1217
* F1 Score: 0.9582
* Accuracy: 0.9582
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
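
Putting the pieces together, an end-to-end inference sketch for this adapter could look like the following. The tokenizer behaviour, the two-label head, and the 300-bp placeholder sequence are all assumptions rather than details taken from the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2),
    adapter_id,
)
model.eval()

inputs = tokenizer("ACGT" * 75, return_tensors="pt")  # placeholder 300-bp sequence
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)
```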
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jeongmi/SOLAR_TG | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-03T16:36:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
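
No usage snippet is provided on this card, so the following is only a hedged sketch based on the repository tags (a Llama-architecture text-generation model stored with 4-bit quantization). The prompt and generation settings are illustrative, and loading a bitsandbytes-quantized checkpoint typically requires a GPU plus the `bitsandbytes` and `accelerate` packages.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: the 4-bit tag suggests a quantized checkpoint; adjust loading options as needed.
model_id = "jeongmi/SOLAR_TG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```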
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3017 | 1.0 | 1563 | 0.3571 |
| 0.2894 | 2.0 | 3126 | 0.3516 |
| 0.2804 | 3.0 | 4689 | 0.3439 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "prajjwal1/bert-tiny", "model-index": [{"name": "results", "results": []}]} | soheill/results | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:36:31+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-prajjwal1/bert-tiny #license-mit #autotrain_compatible #endpoints_compatible #region-us
| results
=======
This model is a fine-tuned version of prajjwal1/bert-tiny on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3439
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
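
As a hedged usage sketch, the fine-tuned classifier can be queried through the `pipeline` API; the example sentence is arbitrary, and the label names depend on how the model was configured, which the card does not document.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="soheill/results")
print(classifier("This is a surprisingly pleasant example sentence."))
```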
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-prajjwal1/bert-tiny #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the small helper from the Hugging Face Deep RL course notebooks:
# it downloads and unpickles the model dictionary from the Hub; `gym` is Gym/Gymnasium.
model = load_from_hub(repo_id="TeoGal/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
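
Once loaded, the greedy policy simply takes the arg-max action from the stored Q-table for the current state. A short sketch follows, assuming the pickled dictionary exposes the table under a `"qtable"` key, as in the Hugging Face Deep RL course notebooks.

```python
import numpy as np

qtable = model["qtable"]                 # assumption: key used by the course notebooks
state, info = env.reset()                # Gymnasium-style reset; plain gym returns just `state`
action = int(np.argmax(qtable[state]))   # greedy action for the current state
```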
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.76", "name": "mean_reward", "verified": false}]}]}]} | TeoGal/q-Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-03T16:36:48+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ezyyeah/murix-large-v3-1k-MERGED | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:37:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_boolq_bert_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3270
- Accuracy: 0.8333
- F1: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
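
The hyperparameters above map directly onto the Hugging Face `Trainer` API. The sketch below is a minimal, illustrative reconstruction rather than the exact training script: the tiny stand-in dataset, `num_labels=2`, and the tokenization settings are assumptions, while the numeric values come from the list above (the reported Adam betas and epsilon match the `Trainer` optimizer defaults).

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "EMBEDDIA/crosloengual-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny stand-in for the undocumented BoolQ-style training data.
raw = Dataset.from_dict({"text": ["Ali je voda mokra?", "Ali je ogenj mrzel?"], "label": [1, 0]})
encoded = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

args = TrainingArguments(
    output_dir="fine_tuned_boolq_bert_croslo",
    learning_rate=2e-5,              # learning_rate
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    seed=42,
    lr_scheduler_type="linear",      # lr_scheduler_type
    max_steps=400,                   # training_steps
)

trainer = Trainer(model=model, args=args, train_dataset=encoded, eval_dataset=encoded)
trainer.train()
```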
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.5567 | 4.1667 | 50 | 0.5262 | 0.7222 | 0.6523 |
| 0.1098 | 8.3333 | 100 | 0.8949 | 0.8333 | 0.8243 |
| 0.0031 | 12.5 | 150 | 1.2237 | 0.7778 | 0.7778 |
| 0.0011 | 16.6667 | 200 | 1.2641 | 0.7778 | 0.7778 |
| 0.0008 | 20.8333 | 250 | 1.2343 | 0.8333 | 0.8243 |
| 0.0007 | 25.0 | 300 | 1.2852 | 0.8333 | 0.8243 |
| 0.0005 | 29.1667 | 350 | 1.3133 | 0.8333 | 0.8243 |
| 0.0005 | 33.3333 | 400 | 1.3270 | 0.8333 | 0.8243 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "fine_tuned_boolq_bert_croslo", "results": []}]} | lenatr99/fine_tuned_boolq_bert_croslo | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:40:39+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| fine\_tuned\_boolq\_bert\_croslo
================================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3270
* Accuracy: 0.8333
* F1: 0.8243
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
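
As a placeholder until the authors fill this in, the snippet below shows one plausible way to query a conversational Llama-family checkpoint with `transformers`. The repository id is taken from this card's metadata; whether the tokenizer ships a chat template (as the "conversational" tag suggests) and the generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cilantro9246/lji9v56"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

messages = [{"role": "user", "content": "Summarize what a model card is in one sentence."}]
# Assumes the tokenizer defines a chat template.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```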
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/lji9v56 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:41:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5549
- F1 Score: 0.7132
- Accuracy: 0.716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
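
This repository uses the PEFT library, so the run above presumably trains a small adapter on top of the frozen seqsight backbone. The sketch below only illustrates the general LoRA pattern, using a generic BERT-style stand-in because the loading details of the seqsight base checkpoint are not documented here; the adapter type, rank, and target modules shown are assumptions, not the values used for this checkpoint.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Generic stand-in backbone, used purely to show the adapter mechanics.
backbone = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # assumed rank
    lora_alpha=16,                      # assumed scaling
    lora_dropout=0.1,
    target_modules=["query", "value"],  # assumed attention projections
)
peft_model = get_peft_model(backbone, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```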
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.634 | 0.93 | 200 | 0.5711 | 0.7179 | 0.718 |
| 0.5965 | 1.87 | 400 | 0.5680 | 0.6977 | 0.698 |
| 0.5884 | 2.8 | 600 | 0.5541 | 0.7093 | 0.712 |
| 0.5816 | 3.74 | 800 | 0.5569 | 0.7101 | 0.71 |
| 0.5801 | 4.67 | 1000 | 0.5477 | 0.7180 | 0.719 |
| 0.572 | 5.61 | 1200 | 0.5466 | 0.7244 | 0.725 |
| 0.5667 | 6.54 | 1400 | 0.5492 | 0.7268 | 0.727 |
| 0.5625 | 7.48 | 1600 | 0.5510 | 0.7201 | 0.721 |
| 0.5596 | 8.41 | 1800 | 0.5482 | 0.7151 | 0.715 |
| 0.5585 | 9.35 | 2000 | 0.5531 | 0.7097 | 0.712 |
| 0.5525 | 10.28 | 2200 | 0.5576 | 0.7141 | 0.714 |
| 0.5513 | 11.21 | 2400 | 0.5523 | 0.7223 | 0.723 |
| 0.5457 | 12.15 | 2600 | 0.5493 | 0.7081 | 0.711 |
| 0.5403 | 13.08 | 2800 | 0.5559 | 0.7157 | 0.72 |
| 0.5398 | 14.02 | 3000 | 0.5553 | 0.7123 | 0.713 |
| 0.5364 | 14.95 | 3200 | 0.5487 | 0.7163 | 0.718 |
| 0.533 | 15.89 | 3400 | 0.5521 | 0.7217 | 0.722 |
| 0.5299 | 16.82 | 3600 | 0.5576 | 0.7101 | 0.711 |
| 0.5268 | 17.76 | 3800 | 0.5548 | 0.7180 | 0.719 |
| 0.5263 | 18.69 | 4000 | 0.5564 | 0.7150 | 0.715 |
| 0.525 | 19.63 | 4200 | 0.5584 | 0.7231 | 0.725 |
| 0.5235 | 20.56 | 4400 | 0.5605 | 0.7337 | 0.734 |
| 0.5209 | 21.5 | 4600 | 0.5643 | 0.7158 | 0.716 |
| 0.5176 | 22.43 | 4800 | 0.5569 | 0.7192 | 0.721 |
| 0.517 | 23.36 | 5000 | 0.5774 | 0.7150 | 0.715 |
| 0.5139 | 24.3 | 5200 | 0.5743 | 0.7049 | 0.706 |
| 0.5084 | 25.23 | 5400 | 0.5705 | 0.7313 | 0.732 |
| 0.5141 | 26.17 | 5600 | 0.5550 | 0.7319 | 0.732 |
| 0.5062 | 27.1 | 5800 | 0.5617 | 0.7247 | 0.725 |
| 0.5043 | 28.04 | 6000 | 0.5665 | 0.7436 | 0.744 |
| 0.5019 | 28.97 | 6200 | 0.5656 | 0.7231 | 0.723 |
| 0.5054 | 29.91 | 6400 | 0.5654 | 0.7313 | 0.732 |
| 0.5034 | 30.84 | 6600 | 0.5657 | 0.7329 | 0.733 |
| 0.5017 | 31.78 | 6800 | 0.5651 | 0.7250 | 0.725 |
| 0.4963 | 32.71 | 7000 | 0.5708 | 0.7240 | 0.724 |
| 0.502 | 33.64 | 7200 | 0.5654 | 0.7260 | 0.726 |
| 0.4912 | 34.58 | 7400 | 0.5763 | 0.7117 | 0.712 |
| 0.495 | 35.51 | 7600 | 0.5726 | 0.7140 | 0.714 |
| 0.495 | 36.45 | 7800 | 0.5827 | 0.7230 | 0.723 |
| 0.4953 | 37.38 | 8000 | 0.5693 | 0.7211 | 0.721 |
| 0.4878 | 38.32 | 8200 | 0.5775 | 0.7261 | 0.726 |
| 0.4885 | 39.25 | 8400 | 0.5817 | 0.7281 | 0.728 |
| 0.4866 | 40.19 | 8600 | 0.5802 | 0.7191 | 0.719 |
| 0.4858 | 41.12 | 8800 | 0.5821 | 0.7221 | 0.722 |
| 0.4915 | 42.06 | 9000 | 0.5763 | 0.7201 | 0.72 |
| 0.4871 | 42.99 | 9200 | 0.5790 | 0.7130 | 0.713 |
| 0.4845 | 43.93 | 9400 | 0.5827 | 0.7120 | 0.712 |
| 0.4834 | 44.86 | 9600 | 0.5823 | 0.7141 | 0.714 |
| 0.4872 | 45.79 | 9800 | 0.5791 | 0.7201 | 0.72 |
| 0.4824 | 46.73 | 10000 | 0.5811 | 0.7141 | 0.714 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T16:41:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_3-seqsight\_4096\_512\_15M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5549
* F1 Score: 0.7132
* Accuracy: 0.716
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
- mixed_precision_training: Native AMP
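
For inference, the adapter produced by this run would typically be loaded on top of the base instruct model with PEFT. The sketch below is illustrative only: the adapter id comes from this card's metadata, and bfloat16, `device_map="auto"` (which requires `accelerate`), and the `[INST]` prompt format are assumptions about the runtime setup.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "JuanjoLopez19/Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN"  # from metadata

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Explain what a software requirement is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```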
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0356 | 0.9995 | 1855 | 1.4789 |
| 0.9073 | 1.9989 | 3710 | 1.4895 |
| 0.349 | 2.9984 | 5565 | 1.6255 |
| 0.2672 | 3.9978 | 7420 | 1.8033 |
| 0.341 | 4.9973 | 9275 | 2.0346 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN", "results": []}]} | JuanjoLopez19/Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:43:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| Mistral-7B-Instruct-v0.2-finetune-SWE\_90\_10\_EN
=================================================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0346
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_cb_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3249
- Accuracy: 0.6818
- F1: 0.6390
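
The card does not yet document how inputs should be formatted (single sentences or premise/hypothesis pairs) or what the label ids mean, so the following is only a minimal smoke-test sketch using the repository id from this card's metadata.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lenatr99/fine_tuned_cb_croslo")
print(classifier("Mislim, da bo jutri deževalo."))  # prints the predicted label and score
```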
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.7054 | 3.5714 | 50 | 1.4024 | 0.3182 | 0.1536 |
| 0.3117 | 7.1429 | 100 | 1.0030 | 0.6818 | 0.6383 |
| 0.0286 | 10.7143 | 150 | 1.0108 | 0.7273 | 0.6791 |
| 0.0038 | 14.2857 | 200 | 1.1886 | 0.6818 | 0.6390 |
| 0.0025 | 17.8571 | 250 | 1.2342 | 0.6818 | 0.6390 |
| 0.0019 | 21.4286 | 300 | 1.2576 | 0.7273 | 0.6791 |
| 0.0015 | 25.0 | 350 | 1.2963 | 0.6818 | 0.6390 |
| 0.0015 | 28.5714 | 400 | 1.3249 | 0.6818 | 0.6390 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "fine_tuned_cb_croslo", "results": []}]} | lenatr99/fine_tuned_cb_croslo | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:44:34+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| fine\_tuned\_cb\_croslo
=======================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3249
* Accuracy: 0.6818
* F1: 0.6390
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
token-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
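
No usage snippet is provided yet; a minimal sketch for a token-classification checkpoint is shown below. The repository id comes from this card's metadata, the example sentence is arbitrary, and `aggregation_strategy="simple"` is an assumption about how token spans should be merged.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="chanwoopark/roberta-link",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face was founded in New York City."))
```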
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | chanwoopark/roberta-link | null | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:44:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #roberta #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #roberta #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-egyAr-eng_fineTuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1953
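
A minimal usage sketch for this Egyptian Arabic to English checkpoint is shown below; the repository id comes from this card's metadata and the example sentence is only illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="Amr-khaled/opus-mt-egyAr-eng_fineTuned")
print(translator("إزيك؟ عامل إيه النهارده؟", max_length=64))
```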
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
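
Note that the total train batch size of 32 is the product of the per-device batch size (8) and the gradient accumulation steps (4). A minimal sketch of expressing this with `Seq2SeqTrainingArguments` follows; only the numbers come from the list above, while the class choice and output directory are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-egyAr-eng_fineTuned",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 examples x 4 steps = 32 per optimizer update
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    seed=42,
)
```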
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4904 | 0.06 | 20 | 2.5405 |
| 2.4189 | 0.13 | 40 | 2.2441 |
| 2.4096 | 0.19 | 60 | 2.0196 |
| 2.0011 | 0.25 | 80 | 1.9210 |
| 1.8648 | 0.32 | 100 | 1.9143 |
| 2.1312 | 0.38 | 120 | 1.7299 |
| 1.8335 | 0.44 | 140 | 1.7309 |
| 1.8953 | 0.51 | 160 | 1.6637 |
| 1.7433 | 0.57 | 180 | 1.6094 |
| 1.6717 | 0.63 | 200 | 1.6277 |
| 1.8456 | 0.69 | 220 | 1.5190 |
| 1.5594 | 0.76 | 240 | 1.5161 |
| 1.7027 | 0.82 | 260 | 1.4832 |
| 1.5024 | 0.88 | 280 | 1.4489 |
| 1.4542 | 0.95 | 300 | 1.4940 |
| 1.6944 | 1.01 | 320 | 1.4590 |
| 1.1835 | 1.07 | 340 | 1.4306 |
| 0.9745 | 1.14 | 360 | 1.4373 |
| 1.2107 | 1.2 | 380 | 1.3939 |
| 1.0052 | 1.26 | 400 | 1.3884 |
| 1.0351 | 1.33 | 420 | 1.3911 |
| 1.145 | 1.39 | 440 | 1.3541 |
| 0.9529 | 1.45 | 460 | 1.3534 |
| 1.1718 | 1.52 | 480 | 1.3090 |
| 0.9735 | 1.58 | 500 | 1.3072 |
| 1.0636 | 1.64 | 520 | 1.2980 |
| 1.0589 | 1.7 | 540 | 1.2669 |
| 0.8511 | 1.77 | 560 | 1.2689 |
| 1.1347 | 1.83 | 580 | 1.2328 |
| 0.894 | 1.89 | 600 | 1.2335 |
| 0.9555 | 1.96 | 620 | 1.2270 |
| 0.9605 | 2.02 | 640 | 1.2204 |
| 0.6781 | 2.08 | 660 | 1.2287 |
| 0.5184 | 2.15 | 680 | 1.2452 |
| 0.7999 | 2.21 | 700 | 1.2139 |
| 0.5561 | 2.27 | 720 | 1.2213 |
| 0.7168 | 2.34 | 740 | 1.2141 |
| 0.648 | 2.4 | 760 | 1.2071 |
| 0.4999 | 2.46 | 780 | 1.2127 |
| 0.7798 | 2.53 | 800 | 1.2054 |
| 0.5454 | 2.59 | 820 | 1.2017 |
| 0.7056 | 2.65 | 840 | 1.2016 |
| 0.6453 | 2.72 | 860 | 1.1961 |
| 0.5223 | 2.78 | 880 | 1.1964 |
| 0.7595 | 2.84 | 900 | 1.1958 |
| 0.5751 | 2.9 | 920 | 1.1952 |
| 0.6057 | 2.97 | 940 | 1.1953 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"language": ["ar", "en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "Helsinki-NLP/opus-mt-ar-en", "pipeline_tag": "translation", "model-index": [{"name": "opus-mt-egyAr-eng_fineTuned", "results": []}]} | Amr-khaled/opus-mt-egyAr-eng_fineTuned | null | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"ar",
"en",
"base_model:Helsinki-NLP/opus-mt-ar-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:45:20+00:00 | [] | [
"ar",
"en"
] | TAGS
#transformers #safetensors #marian #text2text-generation #generated_from_trainer #translation #ar #en #base_model-Helsinki-NLP/opus-mt-ar-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| opus-mt-egyAr-eng\_fineTuned
============================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1953
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #marian #text2text-generation #generated_from_trainer #translation #ar #en #base_model-Helsinki-NLP/opus-mt-ar-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6597
- F1 Score: 0.7055
- Accuracy: 0.7062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.637 | 3.92 | 200 | 0.5948 | 0.6698 | 0.6716 |
| 0.5893 | 7.84 | 400 | 0.6008 | 0.6755 | 0.6778 |
| 0.5697 | 11.76 | 600 | 0.5872 | 0.6839 | 0.6840 |
| 0.5542 | 15.69 | 800 | 0.5847 | 0.7043 | 0.7049 |
| 0.5394 | 19.61 | 1000 | 0.5803 | 0.7124 | 0.7123 |
| 0.5317 | 23.53 | 1200 | 0.5817 | 0.7004 | 0.7037 |
| 0.5174 | 27.45 | 1400 | 0.5746 | 0.7191 | 0.7198 |
| 0.5127 | 31.37 | 1600 | 0.5709 | 0.7062 | 0.7062 |
| 0.506 | 35.29 | 1800 | 0.5635 | 0.7111 | 0.7111 |
| 0.4972 | 39.22 | 2000 | 0.5585 | 0.7152 | 0.7160 |
| 0.4881 | 43.14 | 2200 | 0.5657 | 0.7172 | 0.7173 |
| 0.4855 | 47.06 | 2400 | 0.5607 | 0.7106 | 0.7111 |
| 0.4771 | 50.98 | 2600 | 0.5740 | 0.7143 | 0.7148 |
| 0.4701 | 54.9 | 2800 | 0.5698 | 0.7226 | 0.7235 |
| 0.4655 | 58.82 | 3000 | 0.5727 | 0.7231 | 0.7235 |
| 0.4592 | 62.75 | 3200 | 0.5749 | 0.7205 | 0.7210 |
| 0.4511 | 66.67 | 3400 | 0.5821 | 0.7191 | 0.7198 |
| 0.4484 | 70.59 | 3600 | 0.5665 | 0.7285 | 0.7296 |
| 0.4467 | 74.51 | 3800 | 0.5741 | 0.7295 | 0.7296 |
| 0.4389 | 78.43 | 4000 | 0.5775 | 0.7244 | 0.7247 |
| 0.438 | 82.35 | 4200 | 0.5870 | 0.7269 | 0.7284 |
| 0.4334 | 86.27 | 4400 | 0.5802 | 0.7342 | 0.7346 |
| 0.4261 | 90.2 | 4600 | 0.5829 | 0.7297 | 0.7296 |
| 0.4196 | 94.12 | 4800 | 0.5916 | 0.7281 | 0.7284 |
| 0.4167 | 98.04 | 5000 | 0.5844 | 0.7228 | 0.7235 |
| 0.4091 | 101.96 | 5200 | 0.5934 | 0.7355 | 0.7358 |
| 0.4099 | 105.88 | 5400 | 0.5895 | 0.7308 | 0.7309 |
| 0.4054 | 109.8 | 5600 | 0.5939 | 0.7294 | 0.7296 |
| 0.4027 | 113.73 | 5800 | 0.6007 | 0.7292 | 0.7296 |
| 0.4029 | 117.65 | 6000 | 0.5960 | 0.7247 | 0.7247 |
| 0.3937 | 121.57 | 6200 | 0.6040 | 0.7210 | 0.7210 |
| 0.3941 | 125.49 | 6400 | 0.6091 | 0.7223 | 0.7222 |
| 0.3917 | 129.41 | 6600 | 0.6112 | 0.7235 | 0.7235 |
| 0.3885 | 133.33 | 6800 | 0.6028 | 0.7284 | 0.7284 |
| 0.3852 | 137.25 | 7000 | 0.6154 | 0.7296 | 0.7296 |
| 0.3781 | 141.18 | 7200 | 0.6169 | 0.7235 | 0.7235 |
| 0.3756 | 145.1 | 7400 | 0.6242 | 0.7319 | 0.7321 |
| 0.3779 | 149.02 | 7600 | 0.6144 | 0.7272 | 0.7272 |
| 0.3764 | 152.94 | 7800 | 0.6155 | 0.7308 | 0.7309 |
| 0.37 | 156.86 | 8000 | 0.6209 | 0.7283 | 0.7284 |
| 0.3706 | 160.78 | 8200 | 0.6228 | 0.7283 | 0.7284 |
| 0.369 | 164.71 | 8400 | 0.6290 | 0.7247 | 0.7247 |
| 0.3634 | 168.63 | 8600 | 0.6289 | 0.7222 | 0.7222 |
| 0.3653 | 172.55 | 8800 | 0.6240 | 0.7246 | 0.7247 |
| 0.3662 | 176.47 | 9000 | 0.6260 | 0.7233 | 0.7235 |
| 0.3559 | 180.39 | 9200 | 0.6308 | 0.7197 | 0.7198 |
| 0.3618 | 184.31 | 9400 | 0.6311 | 0.7284 | 0.7284 |
| 0.3634 | 188.24 | 9600 | 0.6294 | 0.7284 | 0.7284 |
| 0.3661 | 192.16 | 9800 | 0.6271 | 0.7272 | 0.7272 |
| 0.3576 | 196.08 | 10000 | 0.6275 | 0.7272 | 0.7272 |
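
The F1 Score and Accuracy columns above are typical `Trainer` evaluation metrics; a sketch of how they are commonly computed is shown below. The card does not state which F1 averaging was used, so `average="macro"` is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }

# Passed to the Trainer via Trainer(..., compute_metrics=compute_metrics).
```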
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:45:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_0-seqsight\_65536\_512\_47M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6597
* F1 Score: 0.7055
* Accuracy: 0.7062
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9070
- F1 Score: 0.7069
- Accuracy: 0.7074
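
The card ships no usage snippet, so the following is a minimal, hedged sketch of loading this PEFT adapter on top of the base model for inference. The two-label classification head, the toy DNA input, and the use of the base repo's tokenizer are assumptions, not statements from the original training code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 is assumed from the binary F1/accuracy metrics reported above.
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```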
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6249 | 3.92 | 200 | 0.5924 | 0.6809 | 0.6815 |
| 0.5702 | 7.84 | 400 | 0.5896 | 0.6887 | 0.6889 |
| 0.5451 | 11.76 | 600 | 0.5648 | 0.6922 | 0.6938 |
| 0.5224 | 15.69 | 800 | 0.5642 | 0.7134 | 0.7173 |
| 0.5028 | 19.61 | 1000 | 0.5586 | 0.7185 | 0.7185 |
| 0.4866 | 23.53 | 1200 | 0.5690 | 0.7193 | 0.7222 |
| 0.4641 | 27.45 | 1400 | 0.5686 | 0.7400 | 0.7407 |
| 0.4489 | 31.37 | 1600 | 0.5830 | 0.7165 | 0.7173 |
| 0.4304 | 35.29 | 1800 | 0.5761 | 0.7270 | 0.7272 |
| 0.4077 | 39.22 | 2000 | 0.5861 | 0.7297 | 0.7296 |
| 0.3854 | 43.14 | 2200 | 0.6005 | 0.7380 | 0.7383 |
| 0.3724 | 47.06 | 2400 | 0.5956 | 0.7272 | 0.7284 |
| 0.3489 | 50.98 | 2600 | 0.6159 | 0.7297 | 0.7296 |
| 0.3304 | 54.9 | 2800 | 0.6671 | 0.7315 | 0.7333 |
| 0.3153 | 58.82 | 3000 | 0.6739 | 0.7260 | 0.7259 |
| 0.3025 | 62.75 | 3200 | 0.7029 | 0.7280 | 0.7284 |
| 0.2875 | 66.67 | 3400 | 0.6816 | 0.7295 | 0.7296 |
| 0.2738 | 70.59 | 3600 | 0.6824 | 0.7282 | 0.7284 |
| 0.2614 | 74.51 | 3800 | 0.7269 | 0.7351 | 0.7358 |
| 0.2571 | 78.43 | 4000 | 0.7406 | 0.7369 | 0.7370 |
| 0.2395 | 82.35 | 4200 | 0.7667 | 0.7331 | 0.7333 |
| 0.238 | 86.27 | 4400 | 0.7654 | 0.7382 | 0.7395 |
| 0.2193 | 90.2 | 4600 | 0.7736 | 0.7281 | 0.7284 |
| 0.2125 | 94.12 | 4800 | 0.7860 | 0.7234 | 0.7235 |
| 0.21 | 98.04 | 5000 | 0.7801 | 0.7479 | 0.7481 |
| 0.1949 | 101.96 | 5200 | 0.8131 | 0.7366 | 0.7370 |
| 0.1947 | 105.88 | 5400 | 0.8441 | 0.7407 | 0.7407 |
| 0.1882 | 109.8 | 5600 | 0.8412 | 0.7382 | 0.7383 |
| 0.1851 | 113.73 | 5800 | 0.8371 | 0.7302 | 0.7309 |
| 0.1775 | 117.65 | 6000 | 0.8648 | 0.7358 | 0.7358 |
| 0.169 | 121.57 | 6200 | 0.8611 | 0.7346 | 0.7346 |
| 0.1666 | 125.49 | 6400 | 0.8923 | 0.7393 | 0.7395 |
| 0.1665 | 129.41 | 6600 | 0.8906 | 0.7333 | 0.7333 |
| 0.1598 | 133.33 | 6800 | 0.9035 | 0.7345 | 0.7346 |
| 0.1537 | 137.25 | 7000 | 0.9237 | 0.7405 | 0.7407 |
| 0.1541 | 141.18 | 7200 | 0.9118 | 0.7383 | 0.7383 |
| 0.1502 | 145.1 | 7400 | 0.9269 | 0.7419 | 0.7420 |
| 0.1474 | 149.02 | 7600 | 0.9470 | 0.7420 | 0.7420 |
| 0.147 | 152.94 | 7800 | 0.9501 | 0.7395 | 0.7395 |
| 0.1378 | 156.86 | 8000 | 0.9572 | 0.7382 | 0.7383 |
| 0.1426 | 160.78 | 8200 | 0.9603 | 0.7296 | 0.7296 |
| 0.1351 | 164.71 | 8400 | 0.9646 | 0.7320 | 0.7321 |
| 0.1355 | 168.63 | 8600 | 0.9647 | 0.7346 | 0.7346 |
| 0.1327 | 172.55 | 8800 | 0.9743 | 0.7308 | 0.7309 |
| 0.1279 | 176.47 | 9000 | 0.9963 | 0.7356 | 0.7358 |
| 0.1228 | 180.39 | 9200 | 1.0062 | 0.7333 | 0.7333 |
| 0.128 | 184.31 | 9400 | 1.0003 | 0.7395 | 0.7395 |
| 0.1317 | 188.24 | 9600 | 0.9883 | 0.7370 | 0.7370 |
| 0.13 | 192.16 | 9800 | 0.9884 | 0.7382 | 0.7383 |
| 0.132 | 196.08 | 10000 | 0.9877 | 0.7370 | 0.7370 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:45:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_0-seqsight\_65536\_512\_47M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9070
* F1 Score: 0.7069
* Accuracy: 0.7074
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2654
- F1 Score: 0.8818
- Accuracy: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
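
As a reproduction aid, the hyperparameters above translate into a `TrainingArguments` object roughly as follows. The output directory is a placeholder and the evaluation cadence is read off this card's results table rather than taken from the original script, so treat this as a sketch:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gue_mouse_1_l1_f",      # placeholder name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,                   # "training_steps: 10000"
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=200,                     # the results table below evaluates every 200 steps
    logging_steps=200,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default, so no extra
# optimizer configuration is needed; `args` is then passed to Trainer(...) together
# with the PEFT-wrapped model and the tokenized GUE_mouse_1 splits.
```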
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5064 | 0.47 | 200 | 0.4045 | 0.8148 | 0.8148 |
| 0.4235 | 0.95 | 400 | 0.3718 | 0.8328 | 0.8329 |
| 0.3838 | 1.42 | 600 | 0.3340 | 0.8497 | 0.8497 |
| 0.3579 | 1.9 | 800 | 0.3136 | 0.8578 | 0.8578 |
| 0.3359 | 2.37 | 1000 | 0.3129 | 0.8583 | 0.8583 |
| 0.3323 | 2.84 | 1200 | 0.3049 | 0.8639 | 0.8639 |
| 0.3184 | 3.32 | 1400 | 0.2993 | 0.8652 | 0.8652 |
| 0.3257 | 3.79 | 1600 | 0.3028 | 0.8661 | 0.8661 |
| 0.3233 | 4.27 | 1800 | 0.2905 | 0.8711 | 0.8713 |
| 0.3164 | 4.74 | 2000 | 0.2929 | 0.8693 | 0.8694 |
| 0.3168 | 5.21 | 2200 | 0.2850 | 0.8743 | 0.8744 |
| 0.313 | 5.69 | 2400 | 0.2868 | 0.8756 | 0.8756 |
| 0.3079 | 6.16 | 2600 | 0.2849 | 0.8753 | 0.8755 |
| 0.309 | 6.64 | 2800 | 0.2803 | 0.8791 | 0.8792 |
| 0.3075 | 7.11 | 3000 | 0.2860 | 0.8749 | 0.8749 |
| 0.3055 | 7.58 | 3200 | 0.2868 | 0.8738 | 0.8738 |
| 0.305 | 8.06 | 3400 | 0.2796 | 0.8756 | 0.8758 |
| 0.3023 | 8.53 | 3600 | 0.2840 | 0.8762 | 0.8762 |
| 0.3034 | 9.0 | 3800 | 0.2817 | 0.8782 | 0.8783 |
| 0.3017 | 9.48 | 4000 | 0.2795 | 0.8778 | 0.8780 |
| 0.2989 | 9.95 | 4200 | 0.2760 | 0.8782 | 0.8783 |
| 0.2969 | 10.43 | 4400 | 0.2771 | 0.8778 | 0.8778 |
| 0.298 | 10.9 | 4600 | 0.2745 | 0.8794 | 0.8795 |
| 0.2895 | 11.37 | 4800 | 0.2783 | 0.8784 | 0.8784 |
| 0.3009 | 11.85 | 5000 | 0.2740 | 0.8798 | 0.8799 |
| 0.2939 | 12.32 | 5200 | 0.2781 | 0.8799 | 0.8799 |
| 0.2961 | 12.8 | 5400 | 0.2783 | 0.8793 | 0.8793 |
| 0.2931 | 13.27 | 5600 | 0.2719 | 0.8810 | 0.8811 |
| 0.2878 | 13.74 | 5800 | 0.2746 | 0.8791 | 0.8792 |
| 0.2924 | 14.22 | 6000 | 0.2695 | 0.8809 | 0.8809 |
| 0.2862 | 14.69 | 6200 | 0.2703 | 0.8812 | 0.8812 |
| 0.2925 | 15.17 | 6400 | 0.2712 | 0.8826 | 0.8826 |
| 0.2901 | 15.64 | 6600 | 0.2690 | 0.8806 | 0.8807 |
| 0.2855 | 16.11 | 6800 | 0.2670 | 0.8815 | 0.8815 |
| 0.2842 | 16.59 | 7000 | 0.2644 | 0.8824 | 0.8824 |
| 0.2824 | 17.06 | 7200 | 0.2654 | 0.8815 | 0.8815 |
| 0.2861 | 17.54 | 7400 | 0.2664 | 0.8817 | 0.8817 |
| 0.2867 | 18.01 | 7600 | 0.2644 | 0.8840 | 0.8841 |
| 0.2816 | 18.48 | 7800 | 0.2657 | 0.8814 | 0.8814 |
| 0.2876 | 18.96 | 8000 | 0.2633 | 0.8827 | 0.8827 |
| 0.2851 | 19.43 | 8200 | 0.2655 | 0.8820 | 0.8820 |
| 0.2818 | 19.91 | 8400 | 0.2633 | 0.8853 | 0.8854 |
| 0.2851 | 20.38 | 8600 | 0.2637 | 0.8828 | 0.8829 |
| 0.2792 | 20.85 | 8800 | 0.2631 | 0.8856 | 0.8857 |
| 0.2798 | 21.33 | 9000 | 0.2632 | 0.8836 | 0.8836 |
| 0.2814 | 21.8 | 9200 | 0.2650 | 0.8821 | 0.8821 |
| 0.2852 | 22.27 | 9400 | 0.2626 | 0.8830 | 0.8830 |
| 0.2787 | 22.75 | 9600 | 0.2634 | 0.8853 | 0.8854 |
| 0.2794 | 23.22 | 9800 | 0.2643 | 0.8843 | 0.8844 |
| 0.2793 | 23.7 | 10000 | 0.2642 | 0.8839 | 0.8839 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:45:57+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_1-seqsight\_65536\_512\_47M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2654
* F1 Score: 0.8818
* Accuracy: 0.8818
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ayu14/mistral_7b_guanaco | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:46:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2451
- F1 Score: 0.8914
- Accuracy: 0.8915
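
The GUE_mouse_1 data referenced above lives on the Hub as a standard `datasets` dataset; a small sketch of pulling it for inspection is below. The split and column layout are assumptions based on common GUE conventions, not guarantees from this card:

```python
from datasets import load_dataset

# Repository name taken from the dataset link above; split/column layout is assumed.
ds = load_dataset("mahdibaghbanzadeh/GUE_mouse_1")
print(ds)                  # shows the available splits and columns
example = ds["train"][0]   # "train" split assumed
print(example)             # typically a DNA sequence string plus an integer label
```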
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4716 | 0.47 | 200 | 0.3665 | 0.8341 | 0.8342 |
| 0.3645 | 0.95 | 400 | 0.3228 | 0.8565 | 0.8566 |
| 0.3288 | 1.42 | 600 | 0.3039 | 0.8642 | 0.8642 |
| 0.3258 | 1.9 | 800 | 0.2954 | 0.8700 | 0.8700 |
| 0.312 | 2.37 | 1000 | 0.2922 | 0.8701 | 0.8701 |
| 0.3096 | 2.84 | 1200 | 0.2945 | 0.8731 | 0.8731 |
| 0.2987 | 3.32 | 1400 | 0.2809 | 0.8756 | 0.8756 |
| 0.306 | 3.79 | 1600 | 0.2991 | 0.8702 | 0.8703 |
| 0.3005 | 4.27 | 1800 | 0.2755 | 0.8777 | 0.8780 |
| 0.2953 | 4.74 | 2000 | 0.2796 | 0.8785 | 0.8786 |
| 0.2946 | 5.21 | 2200 | 0.2743 | 0.8788 | 0.8792 |
| 0.289 | 5.69 | 2400 | 0.2729 | 0.8826 | 0.8826 |
| 0.2848 | 6.16 | 2600 | 0.2693 | 0.8805 | 0.8808 |
| 0.2831 | 6.64 | 2800 | 0.2664 | 0.8846 | 0.8847 |
| 0.284 | 7.11 | 3000 | 0.2727 | 0.8850 | 0.8850 |
| 0.2786 | 7.58 | 3200 | 0.2695 | 0.8864 | 0.8864 |
| 0.2777 | 8.06 | 3400 | 0.2596 | 0.8851 | 0.8852 |
| 0.2727 | 8.53 | 3600 | 0.2778 | 0.8823 | 0.8823 |
| 0.2793 | 9.0 | 3800 | 0.2641 | 0.8875 | 0.8876 |
| 0.2712 | 9.48 | 4000 | 0.2614 | 0.8876 | 0.8878 |
| 0.2719 | 9.95 | 4200 | 0.2567 | 0.8890 | 0.8891 |
| 0.2689 | 10.43 | 4400 | 0.2578 | 0.8900 | 0.8900 |
| 0.2687 | 10.9 | 4600 | 0.2566 | 0.8919 | 0.8919 |
| 0.2618 | 11.37 | 4800 | 0.2680 | 0.8852 | 0.8852 |
| 0.271 | 11.85 | 5000 | 0.2547 | 0.8921 | 0.8921 |
| 0.2626 | 12.32 | 5200 | 0.2607 | 0.8870 | 0.8870 |
| 0.2647 | 12.8 | 5400 | 0.2639 | 0.8867 | 0.8867 |
| 0.2638 | 13.27 | 5600 | 0.2513 | 0.8917 | 0.8918 |
| 0.2571 | 13.74 | 5800 | 0.2536 | 0.8924 | 0.8925 |
| 0.2594 | 14.22 | 6000 | 0.2541 | 0.8912 | 0.8912 |
| 0.2559 | 14.69 | 6200 | 0.2528 | 0.8926 | 0.8927 |
| 0.2583 | 15.17 | 6400 | 0.2548 | 0.8909 | 0.8909 |
| 0.2589 | 15.64 | 6600 | 0.2536 | 0.8909 | 0.8909 |
| 0.2524 | 16.11 | 6800 | 0.2498 | 0.8928 | 0.8928 |
| 0.2516 | 16.59 | 7000 | 0.2497 | 0.8919 | 0.8919 |
| 0.2538 | 17.06 | 7200 | 0.2479 | 0.8916 | 0.8916 |
| 0.2527 | 17.54 | 7400 | 0.2513 | 0.8913 | 0.8913 |
| 0.2546 | 18.01 | 7600 | 0.2455 | 0.8922 | 0.8922 |
| 0.2471 | 18.48 | 7800 | 0.2491 | 0.8918 | 0.8918 |
| 0.2562 | 18.96 | 8000 | 0.2459 | 0.8925 | 0.8925 |
| 0.253 | 19.43 | 8200 | 0.2487 | 0.8919 | 0.8919 |
| 0.2513 | 19.91 | 8400 | 0.2445 | 0.8948 | 0.8949 |
| 0.2523 | 20.38 | 8600 | 0.2480 | 0.8925 | 0.8925 |
| 0.2474 | 20.85 | 8800 | 0.2449 | 0.8937 | 0.8937 |
| 0.2465 | 21.33 | 9000 | 0.2476 | 0.8927 | 0.8927 |
| 0.2493 | 21.8 | 9200 | 0.2484 | 0.8925 | 0.8925 |
| 0.2532 | 22.27 | 9400 | 0.2457 | 0.8935 | 0.8936 |
| 0.2431 | 22.75 | 9600 | 0.2467 | 0.8922 | 0.8922 |
| 0.2476 | 23.22 | 9800 | 0.2477 | 0.8919 | 0.8919 |
| 0.2482 | 23.7 | 10000 | 0.2472 | 0.8921 | 0.8921 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:46:06+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_1-seqsight\_65536\_512\_47M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2451
* F1 Score: 0.8914
* Accuracy: 0.8915
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** xsa-dev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
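
No usage code is included, so here is a hedged sketch of attaching these LoRA weights to the 4-bit base model with plain `transformers` + `peft` (Unsloth's own `FastLanguageModel` loader is an equally valid route). The prompt and generation settings are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "xsa-dev/hugs_llama3_technique_ft_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The base checkpoint is pre-quantized with bitsandbytes, so it loads in 4-bit as-is.
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```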
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | xsa-dev/hugs_llama3_technique_ft_lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:47:09+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: xsa-dev
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>

| [
"# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
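
Until the authors fill this section in, a hedged starting point is sketched below; it assumes only that the repository holds a chat-tuned Llama-style causal LM (consistent with the `llama` and `conversational` tags) and a tokenizer that ships a chat template:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "arthrod/jaera"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly, what can this model do?"}]
# apply_chat_template relies on the chat template shipped with the tokenizer, if any.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```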
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | arthrod/jaera | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:47:14+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetRobertaNewDataset
This model is a fine-tuned version of [AndreiUrsu/TweetRoberta_5epochs](https://huggingface.co/AndreiUrsu/TweetRoberta_5epochs) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
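
For quick inference with this checkpoint, the high-level `pipeline` API is enough; the snippet below is illustrative, and the label names it prints depend on the config shipped with the model:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AndreiUrsu/TweetRobertaNewDataset")
print(classifier("What a fantastic day, everything is going great!"))
# -> [{'label': ..., 'score': ...}]  (label ids/names come from the uploaded config)
```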
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0 | 1.0 | 1000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 2.0 | 2000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 3.0 | 3000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 4.0 | 4000 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 5.0 | 5000 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "AndreiUrsu/TweetRoberta_5epochs", "model-index": [{"name": "TweetRobertaNewDataset", "results": []}]} | AndreiUrsu/TweetRobertaNewDataset | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:AndreiUrsu/TweetRoberta_5epochs",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:47:22+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-AndreiUrsu/TweetRoberta_5epochs #autotrain_compatible #endpoints_compatible #region-us
| TweetRobertaNewDataset
======================
This model is a fine-tuned version of AndreiUrsu/TweetRoberta\_5epochs on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0000
* Accuracy: 1.0
* F1: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-AndreiUrsu/TweetRoberta_5epochs #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
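
Outside of the `Trainer` abstraction, the optimizer and schedule listed above correspond to the plain-PyTorch setup sketched below; the stand-in module and the total step count are placeholders for illustration only:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Stand-in module; in the real run this is the pythia-160m classifier being fine-tuned.
model = torch.nn.Linear(8, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8)
num_training_steps = 1_000  # placeholder; one epoch over the actual dataset in the real run
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)

for step in range(num_training_steps):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```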
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:48:25+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1
This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- F1 Score: 0.8971
- Accuracy: 0.8971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
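
The card lists the optimization schedule but not the LoRA settings themselves (rank, alpha, target modules). Those are stored alongside the adapter weights and can be read back directly; a small sketch:

```python
from peft import PeftConfig

adapter_id = "mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L32_f"
config = PeftConfig.from_pretrained(adapter_id)

print(config.base_model_name_or_path)  # should point at the seqsight base model
print(config)                          # rank, alpha, dropout, target modules, task type
```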
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4424 | 0.47 | 200 | 0.3542 | 0.8397 | 0.8402 |
| 0.3385 | 0.95 | 400 | 0.3072 | 0.8628 | 0.8629 |
| 0.3132 | 1.42 | 600 | 0.2880 | 0.8750 | 0.8750 |
| 0.3121 | 1.9 | 800 | 0.2883 | 0.8762 | 0.8762 |
| 0.2973 | 2.37 | 1000 | 0.2811 | 0.8780 | 0.8780 |
| 0.2919 | 2.84 | 1200 | 0.2791 | 0.8803 | 0.8804 |
| 0.2792 | 3.32 | 1400 | 0.2647 | 0.8834 | 0.8835 |
| 0.2846 | 3.79 | 1600 | 0.2777 | 0.8788 | 0.8789 |
| 0.2762 | 4.27 | 1800 | 0.2560 | 0.8894 | 0.8895 |
| 0.2699 | 4.74 | 2000 | 0.2609 | 0.8870 | 0.8870 |
| 0.2666 | 5.21 | 2200 | 0.2545 | 0.8863 | 0.8866 |
| 0.2605 | 5.69 | 2400 | 0.2619 | 0.8861 | 0.8861 |
| 0.2555 | 6.16 | 2600 | 0.2478 | 0.8929 | 0.8931 |
| 0.2557 | 6.64 | 2800 | 0.2490 | 0.8922 | 0.8922 |
| 0.2545 | 7.11 | 3000 | 0.2513 | 0.8930 | 0.8930 |
| 0.2466 | 7.58 | 3200 | 0.2533 | 0.8932 | 0.8933 |
| 0.2482 | 8.06 | 3400 | 0.2414 | 0.8944 | 0.8944 |
| 0.2422 | 8.53 | 3600 | 0.2517 | 0.8940 | 0.8940 |
| 0.2502 | 9.0 | 3800 | 0.2511 | 0.8932 | 0.8933 |
| 0.2417 | 9.48 | 4000 | 0.2431 | 0.8978 | 0.8979 |
| 0.2428 | 9.95 | 4200 | 0.2401 | 0.8966 | 0.8967 |
| 0.2404 | 10.43 | 4400 | 0.2427 | 0.8953 | 0.8953 |
| 0.2394 | 10.9 | 4600 | 0.2418 | 0.8949 | 0.8949 |
| 0.2349 | 11.37 | 4800 | 0.2500 | 0.8916 | 0.8916 |
| 0.2407 | 11.85 | 5000 | 0.2409 | 0.8958 | 0.8958 |
| 0.2334 | 12.32 | 5200 | 0.2439 | 0.8921 | 0.8921 |
| 0.2364 | 12.8 | 5400 | 0.2496 | 0.8947 | 0.8947 |
| 0.237 | 13.27 | 5600 | 0.2399 | 0.8947 | 0.8947 |
| 0.2288 | 13.74 | 5800 | 0.2400 | 0.8996 | 0.8996 |
| 0.2319 | 14.22 | 6000 | 0.2448 | 0.8934 | 0.8934 |
| 0.2288 | 14.69 | 6200 | 0.2422 | 0.8989 | 0.8989 |
| 0.2313 | 15.17 | 6400 | 0.2411 | 0.8967 | 0.8967 |
| 0.2331 | 15.64 | 6600 | 0.2416 | 0.8970 | 0.8970 |
| 0.2244 | 16.11 | 6800 | 0.2385 | 0.8981 | 0.8981 |
| 0.2233 | 16.59 | 7000 | 0.2361 | 0.8998 | 0.8998 |
| 0.2281 | 17.06 | 7200 | 0.2381 | 0.8978 | 0.8979 |
| 0.2247 | 17.54 | 7400 | 0.2415 | 0.8947 | 0.8947 |
| 0.2268 | 18.01 | 7600 | 0.2353 | 0.8992 | 0.8992 |
| 0.2208 | 18.48 | 7800 | 0.2409 | 0.8981 | 0.8981 |
| 0.2294 | 18.96 | 8000 | 0.2351 | 0.8996 | 0.8996 |
| 0.2237 | 19.43 | 8200 | 0.2390 | 0.8992 | 0.8992 |
| 0.2243 | 19.91 | 8400 | 0.2363 | 0.8982 | 0.8983 |
| 0.2241 | 20.38 | 8600 | 0.2374 | 0.8993 | 0.8993 |
| 0.2202 | 20.85 | 8800 | 0.2361 | 0.8990 | 0.8990 |
| 0.2218 | 21.33 | 9000 | 0.2373 | 0.8992 | 0.8992 |
| 0.2197 | 21.8 | 9200 | 0.2394 | 0.8992 | 0.8992 |
| 0.2263 | 22.27 | 9400 | 0.2361 | 0.8989 | 0.8989 |
| 0.2152 | 22.75 | 9600 | 0.2377 | 0.8990 | 0.8990 |
| 0.2202 | 23.22 | 9800 | 0.2377 | 0.8990 | 0.8990 |
| 0.2206 | 23.7 | 10000 | 0.2374 | 0.8999 | 0.8999 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_1-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:48:27+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_1-seqsight\_65536\_512\_47M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2380
* F1 Score: 0.8971
* Accuracy: 0.8971
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5948
- F1 Score: 0.6707
- Accuracy: 0.6707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
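
Purely for illustration, the hyperparameters above correspond roughly to the `TrainingArguments`/`Trainer` sketch below. Dataset column names, tokenization, and the evaluation split are assumptions (the card does not document them), and this is not the exact script that produced the model.

```python
# Hedged Trainer sketch matching the listed hyperparameters.
# The "sequence" column and "test" split names are assumptions, not documented by the card.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)

raw = load_dataset("mahdibaghbanzadeh/GUE_mouse_4")
tokenized = raw.map(lambda ex: tokenizer(ex["sequence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="GUE_mouse_4-L1_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],   # split name assumed
    tokenizer=tokenizer,
)
# trainer.train()  # the Trainer's default optimizer already uses betas=(0.9, 0.999), eps=1e-8
```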
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6521 | 1.69 | 200 | 0.6311 | 0.6340 | 0.6383 |
| 0.627 | 3.39 | 400 | 0.6197 | 0.6487 | 0.6500 |
| 0.6163 | 5.08 | 600 | 0.6114 | 0.6666 | 0.6665 |
| 0.6104 | 6.78 | 800 | 0.6103 | 0.6676 | 0.6676 |
| 0.6062 | 8.47 | 1000 | 0.6073 | 0.6693 | 0.6691 |
| 0.6025 | 10.17 | 1200 | 0.6068 | 0.6739 | 0.6739 |
| 0.5986 | 11.86 | 1400 | 0.6013 | 0.6743 | 0.6745 |
| 0.5957 | 13.56 | 1600 | 0.6004 | 0.6776 | 0.6776 |
| 0.5964 | 15.25 | 1800 | 0.5962 | 0.6822 | 0.6824 |
| 0.5907 | 16.95 | 2000 | 0.5946 | 0.6853 | 0.6856 |
| 0.586 | 18.64 | 2200 | 0.5967 | 0.6780 | 0.6782 |
| 0.5872 | 20.34 | 2400 | 0.5974 | 0.6772 | 0.6803 |
| 0.5874 | 22.03 | 2600 | 0.5923 | 0.6867 | 0.6867 |
| 0.5837 | 23.73 | 2800 | 0.5887 | 0.6937 | 0.6936 |
| 0.584 | 25.42 | 3000 | 0.5901 | 0.6885 | 0.6899 |
| 0.5828 | 27.12 | 3200 | 0.5944 | 0.6771 | 0.6787 |
| 0.5809 | 28.81 | 3400 | 0.5882 | 0.6916 | 0.6914 |
| 0.5754 | 30.51 | 3600 | 0.5876 | 0.6937 | 0.6936 |
| 0.5791 | 32.2 | 3800 | 0.5852 | 0.6979 | 0.6978 |
| 0.5786 | 33.9 | 4000 | 0.5855 | 0.6899 | 0.6899 |
| 0.5769 | 35.59 | 4200 | 0.5867 | 0.6903 | 0.6904 |
| 0.5735 | 37.29 | 4400 | 0.5839 | 0.6935 | 0.6936 |
| 0.5771 | 38.98 | 4600 | 0.5826 | 0.6943 | 0.6946 |
| 0.5735 | 40.68 | 4800 | 0.5815 | 0.6977 | 0.6978 |
| 0.5709 | 42.37 | 5000 | 0.5823 | 0.6947 | 0.6946 |
| 0.5736 | 44.07 | 5200 | 0.5811 | 0.6955 | 0.6957 |
| 0.5703 | 45.76 | 5400 | 0.5821 | 0.6947 | 0.6946 |
| 0.5711 | 47.46 | 5600 | 0.5819 | 0.6945 | 0.6946 |
| 0.5716 | 49.15 | 5800 | 0.5847 | 0.6890 | 0.6899 |
| 0.5688 | 50.85 | 6000 | 0.5803 | 0.7000 | 0.6999 |
| 0.5665 | 52.54 | 6200 | 0.5811 | 0.6990 | 0.6989 |
| 0.5651 | 54.24 | 6400 | 0.5798 | 0.6957 | 0.6957 |
| 0.571 | 55.93 | 6600 | 0.5786 | 0.6968 | 0.6968 |
| 0.5676 | 57.63 | 6800 | 0.5794 | 0.6956 | 0.6962 |
| 0.5645 | 59.32 | 7000 | 0.5808 | 0.6942 | 0.6941 |
| 0.566 | 61.02 | 7200 | 0.5794 | 0.6917 | 0.6925 |
| 0.5642 | 62.71 | 7400 | 0.5783 | 0.6971 | 0.6973 |
| 0.5663 | 64.41 | 7600 | 0.5778 | 0.6978 | 0.6978 |
| 0.5662 | 66.1 | 7800 | 0.5793 | 0.6942 | 0.6941 |
| 0.5628 | 67.8 | 8000 | 0.5785 | 0.6971 | 0.6973 |
| 0.5656 | 69.49 | 8200 | 0.5782 | 0.6979 | 0.6978 |
| 0.5631 | 71.19 | 8400 | 0.5776 | 0.6957 | 0.6957 |
| 0.5639 | 72.88 | 8600 | 0.5774 | 0.6987 | 0.6989 |
| 0.5609 | 74.58 | 8800 | 0.5779 | 0.6958 | 0.6957 |
| 0.5636 | 76.27 | 9000 | 0.5780 | 0.6963 | 0.6962 |
| 0.563 | 77.97 | 9200 | 0.5775 | 0.6974 | 0.6973 |
| 0.5616 | 79.66 | 9400 | 0.5776 | 0.6976 | 0.6978 |
| 0.5619 | 81.36 | 9600 | 0.5775 | 0.6962 | 0.6962 |
| 0.5652 | 83.05 | 9800 | 0.5774 | 0.6972 | 0.6973 |
| 0.5595 | 84.75 | 10000 | 0.5774 | 0.6989 | 0.6989 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:50:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_4-seqsight\_65536\_512\_47M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5948
* F1 Score: 0.6707
* Accuracy: 0.6707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** shkna1368
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
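
The card gives no inference snippet, so here is a minimal, hypothetical generation sketch. It assumes the repository holds weights loadable through the standard `transformers` causal-LM API and that plain-text prompting is appropriate; the prompt format used during fine-tuning is not documented here.

```python
# Hedged generation sketch -- prompt style, dtype, and repository layout are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "shkna1368/kurdish_poetry_v3_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a short poem about spring."  # placeholder prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```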
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "pipeline_tag": "text-generation"} | shkna1368/kurdish_poetry_v3_model | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-03T16:50:37+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: shkna1368
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Uploaded model\n\n- Developed by: shkna1368\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: shkna1368\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5997
- F1 Score: 0.6760
- Accuracy: 0.6760
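
The F1 and accuracy figures above are reported without the evaluation code; a plausible (but assumed) `compute_metrics` callback that would produce such numbers with scikit-learn is sketched below. Passed to the `Trainer` as `compute_metrics=compute_metrics`, it yields the two metric columns seen in the training-results table.

```python
# Hedged metrics sketch; the macro-averaging choice for F1 is an assumption, not stated by the card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }
```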
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6464 | 1.69 | 200 | 0.6291 | 0.6383 | 0.6431 |
| 0.6205 | 3.39 | 400 | 0.6127 | 0.6541 | 0.6553 |
| 0.6052 | 5.08 | 600 | 0.6008 | 0.6660 | 0.6660 |
| 0.5962 | 6.78 | 800 | 0.5952 | 0.6850 | 0.6851 |
| 0.589 | 8.47 | 1000 | 0.5906 | 0.6884 | 0.6883 |
| 0.5841 | 10.17 | 1200 | 0.5929 | 0.6830 | 0.6835 |
| 0.5809 | 11.86 | 1400 | 0.5859 | 0.6899 | 0.6899 |
| 0.5737 | 13.56 | 1600 | 0.5868 | 0.6973 | 0.6973 |
| 0.5746 | 15.25 | 1800 | 0.5881 | 0.6834 | 0.6856 |
| 0.5679 | 16.95 | 2000 | 0.5914 | 0.6871 | 0.6883 |
| 0.5629 | 18.64 | 2200 | 0.5909 | 0.6867 | 0.6872 |
| 0.5626 | 20.34 | 2400 | 0.5850 | 0.6899 | 0.6904 |
| 0.5611 | 22.03 | 2600 | 0.6049 | 0.6727 | 0.6771 |
| 0.5549 | 23.73 | 2800 | 0.5837 | 0.6957 | 0.6957 |
| 0.5551 | 25.42 | 3000 | 0.5832 | 0.6980 | 0.6989 |
| 0.5532 | 27.12 | 3200 | 0.5874 | 0.6884 | 0.6893 |
| 0.55 | 28.81 | 3400 | 0.5862 | 0.6948 | 0.6946 |
| 0.544 | 30.51 | 3600 | 0.5865 | 0.7006 | 0.7005 |
| 0.5452 | 32.2 | 3800 | 0.5857 | 0.6985 | 0.6984 |
| 0.5445 | 33.9 | 4000 | 0.5852 | 0.6965 | 0.6968 |
| 0.5391 | 35.59 | 4200 | 0.5939 | 0.6867 | 0.6872 |
| 0.5392 | 37.29 | 4400 | 0.5879 | 0.7016 | 0.7015 |
| 0.5392 | 38.98 | 4600 | 0.5866 | 0.7011 | 0.7010 |
| 0.5349 | 40.68 | 4800 | 0.5890 | 0.6990 | 0.6989 |
| 0.5301 | 42.37 | 5000 | 0.5922 | 0.6913 | 0.6914 |
| 0.5352 | 44.07 | 5200 | 0.5859 | 0.7011 | 0.7010 |
| 0.5268 | 45.76 | 5400 | 0.5905 | 0.6947 | 0.6946 |
| 0.5284 | 47.46 | 5600 | 0.5919 | 0.7042 | 0.7042 |
| 0.5294 | 49.15 | 5800 | 0.5930 | 0.6938 | 0.6941 |
| 0.5242 | 50.85 | 6000 | 0.5896 | 0.6899 | 0.6899 |
| 0.5227 | 52.54 | 6200 | 0.5891 | 0.6995 | 0.6994 |
| 0.519 | 54.24 | 6400 | 0.5922 | 0.6995 | 0.6994 |
| 0.5246 | 55.93 | 6600 | 0.5936 | 0.6934 | 0.6936 |
| 0.5201 | 57.63 | 6800 | 0.5891 | 0.6989 | 0.6989 |
| 0.5165 | 59.32 | 7000 | 0.5952 | 0.6956 | 0.6957 |
| 0.5146 | 61.02 | 7200 | 0.5919 | 0.6985 | 0.6984 |
| 0.5153 | 62.71 | 7400 | 0.5909 | 0.6995 | 0.6994 |
| 0.5157 | 64.41 | 7600 | 0.5900 | 0.6995 | 0.6994 |
| 0.5143 | 66.1 | 7800 | 0.5983 | 0.7005 | 0.7005 |
| 0.5122 | 67.8 | 8000 | 0.5958 | 0.6994 | 0.6994 |
| 0.5115 | 69.49 | 8200 | 0.5938 | 0.7016 | 0.7015 |
| 0.5125 | 71.19 | 8400 | 0.5931 | 0.6948 | 0.6946 |
| 0.5132 | 72.88 | 8600 | 0.5940 | 0.6995 | 0.6994 |
| 0.5102 | 74.58 | 8800 | 0.5946 | 0.6942 | 0.6941 |
| 0.5131 | 76.27 | 9000 | 0.5943 | 0.6963 | 0.6962 |
| 0.5078 | 77.97 | 9200 | 0.5952 | 0.6942 | 0.6941 |
| 0.5101 | 79.66 | 9400 | 0.5941 | 0.6969 | 0.6968 |
| 0.5088 | 81.36 | 9600 | 0.5944 | 0.6942 | 0.6941 |
| 0.5098 | 83.05 | 9800 | 0.5942 | 0.6937 | 0.6936 |
| 0.505 | 84.75 | 10000 | 0.5946 | 0.6937 | 0.6936 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:50:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_4-seqsight\_65536\_512\_47M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5997
* F1 Score: 0.6760
* Accuracy: 0.6760
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5934
- F1 Score: 0.6669
- Accuracy: 0.6670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
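
The card records the optimization schedule but not the PEFT configuration itself. Purely as an illustration of how such an adapter is usually attached (the actual rank, scaling, dropout, and target modules used here are unknown), a hedged LoRA setup might look like this:

```python
# Hedged LoRA sketch -- r, lora_alpha, lora_dropout, and target_modules are placeholders,
# not values taken from this card.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2, trust_remote_code=True
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # assumed module names; depends on the base architecture
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # shows how few weights the adapter actually trains
```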
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6385 | 1.69 | 200 | 0.6138 | 0.6592 | 0.6591 |
| 0.6104 | 3.39 | 400 | 0.6019 | 0.6744 | 0.6745 |
| 0.5929 | 5.08 | 600 | 0.5917 | 0.6831 | 0.6830 |
| 0.5813 | 6.78 | 800 | 0.5865 | 0.6924 | 0.6930 |
| 0.5717 | 8.47 | 1000 | 0.5868 | 0.6908 | 0.6914 |
| 0.5641 | 10.17 | 1200 | 0.5854 | 0.6930 | 0.6936 |
| 0.5553 | 11.86 | 1400 | 0.5774 | 0.7075 | 0.7074 |
| 0.5443 | 13.56 | 1600 | 0.5859 | 0.6988 | 0.6989 |
| 0.5406 | 15.25 | 1800 | 0.5799 | 0.7015 | 0.7021 |
| 0.5298 | 16.95 | 2000 | 0.5854 | 0.6908 | 0.6920 |
| 0.5214 | 18.64 | 2200 | 0.5981 | 0.6859 | 0.6867 |
| 0.5138 | 20.34 | 2400 | 0.5922 | 0.6974 | 0.6973 |
| 0.5109 | 22.03 | 2600 | 0.6084 | 0.6798 | 0.6824 |
| 0.4997 | 23.73 | 2800 | 0.5918 | 0.6984 | 0.6984 |
| 0.4936 | 25.42 | 3000 | 0.6058 | 0.6896 | 0.6941 |
| 0.4897 | 27.12 | 3200 | 0.6117 | 0.6988 | 0.6989 |
| 0.4819 | 28.81 | 3400 | 0.6112 | 0.7056 | 0.7058 |
| 0.4692 | 30.51 | 3600 | 0.6153 | 0.7043 | 0.7042 |
| 0.4657 | 32.2 | 3800 | 0.6426 | 0.6894 | 0.6893 |
| 0.4632 | 33.9 | 4000 | 0.6184 | 0.6939 | 0.6941 |
| 0.4527 | 35.59 | 4200 | 0.6472 | 0.6833 | 0.6840 |
| 0.4502 | 37.29 | 4400 | 0.6197 | 0.7027 | 0.7026 |
| 0.448 | 38.98 | 4600 | 0.6403 | 0.6931 | 0.6930 |
| 0.4379 | 40.68 | 4800 | 0.6416 | 0.6958 | 0.6957 |
| 0.4331 | 42.37 | 5000 | 0.6411 | 0.6887 | 0.6888 |
| 0.4327 | 44.07 | 5200 | 0.6587 | 0.6921 | 0.6920 |
| 0.4206 | 45.76 | 5400 | 0.6642 | 0.6921 | 0.6920 |
| 0.4175 | 47.46 | 5600 | 0.6771 | 0.6971 | 0.6973 |
| 0.4181 | 49.15 | 5800 | 0.6664 | 0.6952 | 0.6952 |
| 0.4105 | 50.85 | 6000 | 0.6591 | 0.6831 | 0.6830 |
| 0.4053 | 52.54 | 6200 | 0.6680 | 0.6921 | 0.6920 |
| 0.4009 | 54.24 | 6400 | 0.6803 | 0.6836 | 0.6835 |
| 0.3975 | 55.93 | 6600 | 0.6966 | 0.6871 | 0.6872 |
| 0.3969 | 57.63 | 6800 | 0.6871 | 0.6979 | 0.6978 |
| 0.3936 | 59.32 | 7000 | 0.7074 | 0.6852 | 0.6851 |
| 0.3866 | 61.02 | 7200 | 0.7011 | 0.6894 | 0.6893 |
| 0.3839 | 62.71 | 7400 | 0.6931 | 0.6868 | 0.6867 |
| 0.3815 | 64.41 | 7600 | 0.6938 | 0.6878 | 0.6877 |
| 0.3823 | 66.1 | 7800 | 0.7002 | 0.6857 | 0.6856 |
| 0.3772 | 67.8 | 8000 | 0.7163 | 0.6889 | 0.6888 |
| 0.3756 | 69.49 | 8200 | 0.7114 | 0.6915 | 0.6914 |
| 0.3761 | 71.19 | 8400 | 0.7144 | 0.6909 | 0.6909 |
| 0.3735 | 72.88 | 8600 | 0.7128 | 0.6899 | 0.6899 |
| 0.3722 | 74.58 | 8800 | 0.7161 | 0.6910 | 0.6909 |
| 0.3731 | 76.27 | 9000 | 0.7225 | 0.6883 | 0.6883 |
| 0.3656 | 77.97 | 9200 | 0.7261 | 0.6889 | 0.6888 |
| 0.3672 | 79.66 | 9400 | 0.7293 | 0.6921 | 0.6920 |
| 0.3749 | 81.36 | 9600 | 0.7191 | 0.6877 | 0.6877 |
| 0.3646 | 83.05 | 9800 | 0.7229 | 0.6910 | 0.6909 |
| 0.3638 | 84.75 | 10000 | 0.7225 | 0.6905 | 0.6904 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_4-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:51:16+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_4-seqsight\_65536\_512\_47M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5934
* F1 Score: 0.6669
* Accuracy: 0.6670
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4966
- F1 Score: 0.7905
- Accuracy: 0.7908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
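
Although this section is empty, the header links the `mahdibaghbanzadeh/GUE_mouse_3` dataset. A minimal inspection sketch (split and column names are assumptions until you print them) could be:

```python
# Hedged dataset-inspection sketch; the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("mahdibaghbanzadeh/GUE_mouse_3")
print(ds)              # lists the available splits and their columns
print(ds["train"][0])  # first training example, assuming a "train" split exists
```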
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6153 | 13.33 | 200 | 0.5197 | 0.7270 | 0.7280 |
| 0.5296 | 26.67 | 400 | 0.4688 | 0.7612 | 0.7615 |
| 0.4753 | 40.0 | 600 | 0.4453 | 0.7862 | 0.7866 |
| 0.4354 | 53.33 | 800 | 0.4455 | 0.7824 | 0.7824 |
| 0.4069 | 66.67 | 1000 | 0.4548 | 0.7822 | 0.7824 |
| 0.3871 | 80.0 | 1200 | 0.4627 | 0.7824 | 0.7824 |
| 0.3672 | 93.33 | 1400 | 0.4847 | 0.7822 | 0.7824 |
| 0.3483 | 106.67 | 1600 | 0.4773 | 0.7822 | 0.7824 |
| 0.3364 | 120.0 | 1800 | 0.4735 | 0.7899 | 0.7908 |
| 0.3245 | 133.33 | 2000 | 0.4826 | 0.7699 | 0.7699 |
| 0.3147 | 146.67 | 2200 | 0.4741 | 0.7772 | 0.7782 |
| 0.3043 | 160.0 | 2400 | 0.4895 | 0.7944 | 0.7950 |
| 0.2964 | 173.33 | 2600 | 0.4888 | 0.7907 | 0.7908 |
| 0.2902 | 186.67 | 2800 | 0.4801 | 0.8115 | 0.8117 |
| 0.2778 | 200.0 | 3000 | 0.4895 | 0.7991 | 0.7992 |
| 0.2702 | 213.33 | 3200 | 0.4908 | 0.7908 | 0.7908 |
| 0.2638 | 226.67 | 3400 | 0.5043 | 0.8114 | 0.8117 |
| 0.2601 | 240.0 | 3600 | 0.5133 | 0.8117 | 0.8117 |
| 0.2565 | 253.33 | 3800 | 0.5242 | 0.7865 | 0.7866 |
| 0.2513 | 266.67 | 4000 | 0.5249 | 0.8033 | 0.8033 |
| 0.2463 | 280.0 | 4200 | 0.5159 | 0.8033 | 0.8033 |
| 0.2422 | 293.33 | 4400 | 0.5105 | 0.8159 | 0.8159 |
| 0.2422 | 306.67 | 4600 | 0.5276 | 0.8033 | 0.8033 |
| 0.2378 | 320.0 | 4800 | 0.5143 | 0.8201 | 0.8201 |
| 0.2339 | 333.33 | 5000 | 0.5301 | 0.7991 | 0.7992 |
| 0.2291 | 346.67 | 5200 | 0.5187 | 0.8159 | 0.8159 |
| 0.2262 | 360.0 | 5400 | 0.5386 | 0.8033 | 0.8033 |
| 0.2286 | 373.33 | 5600 | 0.5374 | 0.8033 | 0.8033 |
| 0.2224 | 386.67 | 5800 | 0.5370 | 0.8032 | 0.8033 |
| 0.2187 | 400.0 | 6000 | 0.5439 | 0.7991 | 0.7992 |
| 0.2199 | 413.33 | 6200 | 0.5510 | 0.7992 | 0.7992 |
| 0.215 | 426.67 | 6400 | 0.5624 | 0.8117 | 0.8117 |
| 0.2085 | 440.0 | 6600 | 0.5570 | 0.8033 | 0.8033 |
| 0.2148 | 453.33 | 6800 | 0.5593 | 0.7948 | 0.7950 |
| 0.2102 | 466.67 | 7000 | 0.5556 | 0.8033 | 0.8033 |
| 0.2079 | 480.0 | 7200 | 0.5660 | 0.8159 | 0.8159 |
| 0.2083 | 493.33 | 7400 | 0.5608 | 0.7991 | 0.7992 |
| 0.2029 | 506.67 | 7600 | 0.5672 | 0.8033 | 0.8033 |
| 0.2086 | 520.0 | 7800 | 0.5573 | 0.8033 | 0.8033 |
| 0.2045 | 533.33 | 8000 | 0.5656 | 0.7992 | 0.7992 |
| 0.2064 | 546.67 | 8200 | 0.5623 | 0.7950 | 0.7950 |
| 0.2037 | 560.0 | 8400 | 0.5711 | 0.8033 | 0.8033 |
| 0.2058 | 573.33 | 8600 | 0.5637 | 0.8033 | 0.8033 |
| 0.199 | 586.67 | 8800 | 0.5728 | 0.7992 | 0.7992 |
| 0.1986 | 600.0 | 9000 | 0.5716 | 0.8033 | 0.8033 |
| 0.199 | 613.33 | 9200 | 0.5719 | 0.8075 | 0.8075 |
| 0.2011 | 626.67 | 9400 | 0.5749 | 0.8033 | 0.8033 |
| 0.1941 | 640.0 | 9600 | 0.5781 | 0.8033 | 0.8033 |
| 0.196 | 653.33 | 9800 | 0.5787 | 0.8033 | 0.8033 |
| 0.1975 | 666.67 | 10000 | 0.5777 | 0.8033 | 0.8033 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:51:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_3-seqsight\_65536\_512\_47M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4966
* F1 Score: 0.7905
* Accuracy: 0.7908
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
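
In the absence of card-provided code, here is a hedged placeholder sketch for a Llama-architecture causal LM, which is what the repository tags suggest; the chat template, dtype, and hardware requirements are assumptions.

```python
# Hedged sketch -- assumes a standard causal-LM checkpoint with a chat template in this repo.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "golf2248/wf4ax6j"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]  # placeholder message
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```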
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/wf4ax6j | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:52:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7229
- F1 Score: 0.7936
- Accuracy: 0.7950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
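
With `lr_scheduler_type: linear` and no warm-up listed, the learning rate decays linearly from 0.0005 to zero over the 10000 training steps. The `Trainer` builds this schedule internally; a standalone equivalent (the zero warm-up is an assumption) looks like:

```python
# Hedged sketch of the linear schedule implied by these hyperparameters.
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # dummy parameter, only needed to build an optimizer
optimizer = torch.optim.Adam(params, lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)

for step in range(3):
    optimizer.step()
    scheduler.step()
    print(step, scheduler.get_last_lr())  # learning rate shrinking linearly toward zero
```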
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5751 | 13.33 | 200 | 0.4346 | 0.7943 | 0.7950 |
| 0.4162 | 26.67 | 400 | 0.4310 | 0.8108 | 0.8117 |
| 0.3344 | 40.0 | 600 | 0.4316 | 0.8201 | 0.8201 |
| 0.2898 | 53.33 | 800 | 0.4517 | 0.7991 | 0.7992 |
| 0.2598 | 66.67 | 1000 | 0.4565 | 0.8284 | 0.8285 |
| 0.237 | 80.0 | 1200 | 0.4747 | 0.8324 | 0.8326 |
| 0.2109 | 93.33 | 1400 | 0.5427 | 0.7943 | 0.7950 |
| 0.1887 | 106.67 | 1600 | 0.5072 | 0.8493 | 0.8494 |
| 0.1776 | 120.0 | 1800 | 0.5411 | 0.8368 | 0.8368 |
| 0.1623 | 133.33 | 2000 | 0.5828 | 0.8366 | 0.8368 |
| 0.1505 | 146.67 | 2200 | 0.5813 | 0.8532 | 0.8536 |
| 0.137 | 160.0 | 2400 | 0.6017 | 0.8366 | 0.8368 |
| 0.13 | 173.33 | 2600 | 0.6122 | 0.8243 | 0.8243 |
| 0.1221 | 186.67 | 2800 | 0.6114 | 0.8326 | 0.8326 |
| 0.1103 | 200.0 | 3000 | 0.6693 | 0.8117 | 0.8117 |
| 0.1076 | 213.33 | 3200 | 0.6581 | 0.8285 | 0.8285 |
| 0.102 | 226.67 | 3400 | 0.6523 | 0.8490 | 0.8494 |
| 0.0938 | 240.0 | 3600 | 0.6944 | 0.8493 | 0.8494 |
| 0.0883 | 253.33 | 3800 | 0.7203 | 0.8284 | 0.8285 |
| 0.0868 | 266.67 | 4000 | 0.7252 | 0.8367 | 0.8368 |
| 0.0828 | 280.0 | 4200 | 0.7367 | 0.8368 | 0.8368 |
| 0.0758 | 293.33 | 4400 | 0.7482 | 0.8326 | 0.8326 |
| 0.073 | 306.67 | 4600 | 0.7660 | 0.8532 | 0.8536 |
| 0.0744 | 320.0 | 4800 | 0.7260 | 0.8452 | 0.8452 |
| 0.0686 | 333.33 | 5000 | 0.8126 | 0.8030 | 0.8033 |
| 0.0657 | 346.67 | 5200 | 0.8016 | 0.8033 | 0.8033 |
| 0.0612 | 360.0 | 5400 | 0.7908 | 0.8450 | 0.8452 |
| 0.0595 | 373.33 | 5600 | 0.7927 | 0.8325 | 0.8326 |
| 0.0602 | 386.67 | 5800 | 0.8100 | 0.8405 | 0.8410 |
| 0.0534 | 400.0 | 6000 | 0.8413 | 0.8284 | 0.8285 |
| 0.0572 | 413.33 | 6200 | 0.8071 | 0.8201 | 0.8201 |
| 0.054 | 426.67 | 6400 | 0.8397 | 0.8368 | 0.8368 |
| 0.051 | 440.0 | 6600 | 0.8219 | 0.8491 | 0.8494 |
| 0.0486 | 453.33 | 6800 | 0.8881 | 0.8159 | 0.8159 |
| 0.0468 | 466.67 | 7000 | 0.8793 | 0.8283 | 0.8285 |
| 0.0486 | 480.0 | 7200 | 0.8410 | 0.8365 | 0.8368 |
| 0.0448 | 493.33 | 7400 | 0.8617 | 0.8282 | 0.8285 |
| 0.0439 | 506.67 | 7600 | 0.8704 | 0.8284 | 0.8285 |
| 0.0465 | 520.0 | 7800 | 0.8496 | 0.8200 | 0.8201 |
| 0.0459 | 533.33 | 8000 | 0.8654 | 0.8159 | 0.8159 |
| 0.043 | 546.67 | 8200 | 0.8749 | 0.8325 | 0.8326 |
| 0.0427 | 560.0 | 8400 | 0.8373 | 0.8285 | 0.8285 |
| 0.0411 | 573.33 | 8600 | 0.8710 | 0.8366 | 0.8368 |
| 0.0417 | 586.67 | 8800 | 0.8645 | 0.8200 | 0.8201 |
| 0.0415 | 600.0 | 9000 | 0.8574 | 0.8282 | 0.8285 |
| 0.0418 | 613.33 | 9200 | 0.8599 | 0.8241 | 0.8243 |
| 0.0388 | 626.67 | 9400 | 0.8878 | 0.8200 | 0.8201 |
| 0.0387 | 640.0 | 9600 | 0.8772 | 0.8325 | 0.8326 |
| 0.0411 | 653.33 | 9800 | 0.8791 | 0.8283 | 0.8285 |
| 0.0372 | 666.67 | 10000 | 0.8800 | 0.8241 | 0.8243 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:52:15+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_3-seqsight\_65536\_512\_47M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7229
* F1 Score: 0.7936
* Accuracy: 0.7950
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0857
- F1 Score: 0.8073
- Accuracy: 0.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5234 | 13.33 | 200 | 0.3913 | 0.8242 | 0.8243 |
| 0.3311 | 26.67 | 400 | 0.4285 | 0.8272 | 0.8285 |
| 0.2514 | 40.0 | 600 | 0.4827 | 0.8033 | 0.8033 |
| 0.1983 | 53.33 | 800 | 0.4903 | 0.8492 | 0.8494 |
| 0.1525 | 66.67 | 1000 | 0.5545 | 0.8201 | 0.8201 |
| 0.1297 | 80.0 | 1200 | 0.5400 | 0.8324 | 0.8326 |
| 0.1067 | 93.33 | 1400 | 0.6817 | 0.7782 | 0.7782 |
| 0.0876 | 106.67 | 1600 | 0.6357 | 0.8159 | 0.8159 |
| 0.076 | 120.0 | 1800 | 0.6809 | 0.8200 | 0.8201 |
| 0.0666 | 133.33 | 2000 | 0.7299 | 0.8283 | 0.8285 |
| 0.0577 | 146.67 | 2200 | 0.6639 | 0.8452 | 0.8452 |
| 0.0527 | 160.0 | 2400 | 0.7078 | 0.8326 | 0.8326 |
| 0.044 | 173.33 | 2600 | 0.7728 | 0.8243 | 0.8243 |
| 0.0459 | 186.67 | 2800 | 0.7674 | 0.8357 | 0.8368 |
| 0.0383 | 200.0 | 3000 | 0.8229 | 0.8408 | 0.8410 |
| 0.037 | 213.33 | 3200 | 0.7486 | 0.8619 | 0.8619 |
| 0.031 | 226.67 | 3400 | 0.8314 | 0.8535 | 0.8536 |
| 0.0292 | 240.0 | 3600 | 0.7943 | 0.8451 | 0.8452 |
| 0.0234 | 253.33 | 3800 | 0.9168 | 0.8452 | 0.8452 |
| 0.0245 | 266.67 | 4000 | 0.8986 | 0.8368 | 0.8368 |
| 0.025 | 280.0 | 4200 | 0.9041 | 0.8326 | 0.8326 |
| 0.0236 | 293.33 | 4400 | 0.8131 | 0.8494 | 0.8494 |
| 0.0201 | 306.67 | 4600 | 0.9812 | 0.8367 | 0.8368 |
| 0.0235 | 320.0 | 4800 | 0.9153 | 0.8452 | 0.8452 |
| 0.019 | 333.33 | 5000 | 0.9622 | 0.8155 | 0.8159 |
| 0.0181 | 346.67 | 5200 | 0.9807 | 0.8117 | 0.8117 |
| 0.0176 | 360.0 | 5400 | 0.9316 | 0.8325 | 0.8326 |
| 0.0172 | 373.33 | 5600 | 0.9852 | 0.8284 | 0.8285 |
| 0.0157 | 386.67 | 5800 | 0.9615 | 0.8408 | 0.8410 |
| 0.0158 | 400.0 | 6000 | 0.9269 | 0.8284 | 0.8285 |
| 0.0141 | 413.33 | 6200 | 0.9634 | 0.8284 | 0.8285 |
| 0.0159 | 426.67 | 6400 | 1.0444 | 0.8284 | 0.8285 |
| 0.0113 | 440.0 | 6600 | 1.0204 | 0.8367 | 0.8368 |
| 0.0138 | 453.33 | 6800 | 1.0301 | 0.8201 | 0.8201 |
| 0.0132 | 466.67 | 7000 | 0.9787 | 0.8409 | 0.8410 |
| 0.0114 | 480.0 | 7200 | 0.9992 | 0.8325 | 0.8326 |
| 0.012 | 493.33 | 7400 | 1.0057 | 0.8451 | 0.8452 |
| 0.011 | 506.67 | 7600 | 1.0578 | 0.8284 | 0.8285 |
| 0.0115 | 520.0 | 7800 | 1.0444 | 0.8158 | 0.8159 |
| 0.0105 | 533.33 | 8000 | 1.0361 | 0.8408 | 0.8410 |
| 0.0105 | 546.67 | 8200 | 1.0373 | 0.8283 | 0.8285 |
| 0.0097 | 560.0 | 8400 | 1.0294 | 0.8284 | 0.8285 |
| 0.0086 | 573.33 | 8600 | 1.0487 | 0.8368 | 0.8368 |
| 0.0094 | 586.67 | 8800 | 1.1034 | 0.8325 | 0.8326 |
| 0.0088 | 600.0 | 9000 | 1.0760 | 0.8408 | 0.8410 |
| 0.0099 | 613.33 | 9200 | 1.0482 | 0.8325 | 0.8326 |
| 0.0085 | 626.67 | 9400 | 1.0606 | 0.8367 | 0.8368 |
| 0.0084 | 640.0 | 9600 | 1.0605 | 0.8326 | 0.8326 |
| 0.0085 | 653.33 | 9800 | 1.0749 | 0.8367 | 0.8368 |
| 0.0074 | 666.67 | 10000 | 1.0751 | 0.8367 | 0.8368 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:52:16+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_3-seqsight\_65536\_512\_47M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0857
* F1 Score: 0.8073
* Accuracy: 0.8075
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3541
- F1 Score: 0.8658
- Accuracy: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
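
For readers who want to see how these values translate into code, the following is a minimal, hypothetical sketch using the 🤗 Transformers `TrainingArguments` API. It simply restates the hyperparameters listed above; the actual training script for this model is not published in this card.

```python
from transformers import TrainingArguments

# Hypothetical sketch that restates the hyperparameters listed above;
# not the exact script used to train this adapter.
training_args = TrainingArguments(
    output_dir="GUE_mouse_2-seqsight_65536_512_47M-L1_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,                          # seed: 42
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon=1e-08
    lr_scheduler_type="linear",       # linear learning-rate schedule
    max_steps=10_000,                 # training_steps: 10000
)
```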
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4313 | 9.52 | 200 | 0.3500 | 0.8142 | 0.8171 |
| 0.3241 | 19.05 | 400 | 0.3304 | 0.8337 | 0.8354 |
| 0.2983 | 28.57 | 600 | 0.2979 | 0.8474 | 0.8476 |
| 0.2808 | 38.1 | 800 | 0.2914 | 0.8656 | 0.8659 |
| 0.2646 | 47.62 | 1000 | 0.2881 | 0.8658 | 0.8659 |
| 0.256 | 57.14 | 1200 | 0.2981 | 0.8562 | 0.8567 |
| 0.2392 | 66.67 | 1400 | 0.3045 | 0.8685 | 0.8689 |
| 0.2375 | 76.19 | 1600 | 0.3021 | 0.8713 | 0.8720 |
| 0.2255 | 85.71 | 1800 | 0.2869 | 0.8747 | 0.875 |
| 0.2177 | 95.24 | 2000 | 0.2744 | 0.8689 | 0.8689 |
| 0.2113 | 104.76 | 2200 | 0.2641 | 0.8780 | 0.8780 |
| 0.2051 | 114.29 | 2400 | 0.2741 | 0.8811 | 0.8811 |
| 0.2064 | 123.81 | 2600 | 0.2673 | 0.8841 | 0.8841 |
| 0.1955 | 133.33 | 2800 | 0.2755 | 0.8779 | 0.8780 |
| 0.1906 | 142.86 | 3000 | 0.2868 | 0.8717 | 0.8720 |
| 0.1845 | 152.38 | 3200 | 0.2780 | 0.8749 | 0.875 |
| 0.1824 | 161.9 | 3400 | 0.3034 | 0.8716 | 0.8720 |
| 0.1783 | 171.43 | 3600 | 0.2952 | 0.8747 | 0.875 |
| 0.1771 | 180.95 | 3800 | 0.2867 | 0.8719 | 0.8720 |
| 0.1721 | 190.48 | 4000 | 0.2793 | 0.8780 | 0.8780 |
| 0.1691 | 200.0 | 4200 | 0.3039 | 0.8746 | 0.875 |
| 0.1671 | 209.52 | 4400 | 0.2854 | 0.8841 | 0.8841 |
| 0.1618 | 219.05 | 4600 | 0.2955 | 0.8718 | 0.8720 |
| 0.1632 | 228.57 | 4800 | 0.2898 | 0.8811 | 0.8811 |
| 0.158 | 238.1 | 5000 | 0.3040 | 0.8748 | 0.875 |
| 0.1574 | 247.62 | 5200 | 0.3039 | 0.8749 | 0.875 |
| 0.158 | 257.14 | 5400 | 0.3062 | 0.8749 | 0.875 |
| 0.1516 | 266.67 | 5600 | 0.3205 | 0.8809 | 0.8811 |
| 0.1522 | 276.19 | 5800 | 0.3115 | 0.8748 | 0.875 |
| 0.1493 | 285.71 | 6000 | 0.3113 | 0.8841 | 0.8841 |
| 0.1447 | 295.24 | 6200 | 0.3163 | 0.8810 | 0.8811 |
| 0.1432 | 304.76 | 6400 | 0.3184 | 0.8780 | 0.8780 |
| 0.1455 | 314.29 | 6600 | 0.3097 | 0.8811 | 0.8811 |
| 0.1432 | 323.81 | 6800 | 0.3162 | 0.8841 | 0.8841 |
| 0.1436 | 333.33 | 7000 | 0.3118 | 0.8841 | 0.8841 |
| 0.1408 | 342.86 | 7200 | 0.3115 | 0.8872 | 0.8872 |
| 0.1448 | 352.38 | 7400 | 0.3119 | 0.8872 | 0.8872 |
| 0.1407 | 361.9 | 7600 | 0.3106 | 0.8810 | 0.8811 |
| 0.1393 | 371.43 | 7800 | 0.3156 | 0.8841 | 0.8841 |
| 0.135 | 380.95 | 8000 | 0.3190 | 0.8811 | 0.8811 |
| 0.1352 | 390.48 | 8200 | 0.3213 | 0.8780 | 0.8780 |
| 0.1354 | 400.0 | 8400 | 0.3208 | 0.8872 | 0.8872 |
| 0.1345 | 409.52 | 8600 | 0.3207 | 0.8872 | 0.8872 |
| 0.1381 | 419.05 | 8800 | 0.3193 | 0.8810 | 0.8811 |
| 0.1323 | 428.57 | 9000 | 0.3282 | 0.8841 | 0.8841 |
| 0.1319 | 438.1 | 9200 | 0.3342 | 0.8780 | 0.8780 |
| 0.1339 | 447.62 | 9400 | 0.3331 | 0.8810 | 0.8811 |
| 0.132 | 457.14 | 9600 | 0.3261 | 0.8872 | 0.8872 |
| 0.1377 | 466.67 | 9800 | 0.3275 | 0.8872 | 0.8872 |
| 0.1341 | 476.19 | 10000 | 0.3258 | 0.8872 | 0.8872 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:52:36+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_2-seqsight\_65536\_512\_47M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3541
* F1 Score: 0.8658
* Accuracy: 0.8659
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OpenCerebrum-1.0-7b-SFT - bnb 8bits
- Model creator: https://huggingface.co/Locutusque/
- Original model: https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-SFT/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- open-source
- code
- math
- chemistry
- biology
- text-generation
- question-answering
datasets:
- Open-Orca/SlimOrca
- glaiveai/glaive-code-assistant
- camel-ai/physics
- camel-ai/math
- camel-ai/chemistry
- camel-ai/biology
- WizardLM/WizardLM_evol_instruct_V2_196k
- microsoft/orca-math-word-problems-200k
- grimulkan/theory-of-mind
- Vezora/Tested-22k-Python-Alpaca
- m-a-p/Code-Feedback
- Locutusque/arc-cot
- jondurbin/airoboros-2.1
- WizardLM/WizardLM_evol_instruct_70k
pipeline_tag: text-generation
---
# OpenCerebrum-1.0-7B-SFT
OpenCerebrum-1.0-7B-SFT is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of AetherResearch's proprietary Cerebrum model.
The model was fine-tuned on approximately 1.2 million examples across 14 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.
## Model Details
- **Base Model:** alpindale/Mistral-7B-v0.2-hf
- **Parameters:** 7 billion
- **Fine-Tuning Dataset Size:** ~1,200,000 examples
- **Fine-Tuning Data:** Amalgamation of 14 public datasets
- **Language:** English
- **License:** Apache 2.0
## Intended Use
OpenCerebrum-1.0-7B-SFT is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.
However, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.
## Limitations and Biases
- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.
- With 1.2 million training examples, the fine-tuning data is still limited compared to the proprietary Cerebrum data.
- As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.
## Training Details
The model was fine-tuned on the 14 datasets listed in the Datasets section, totaling approximately 1.2 million examples. Default training hyperparameters were used. In the future, the fine-tuning dataset may be condensed to more closely match the 5,000 example dataset reputedly used for the original Cerebrum model.
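
For orientation only, here is a hedged sketch of how an 8-bit bitsandbytes checkpoint such as this repository is commonly loaded with 🤗 Transformers; it is not an official example from the quantizer or the original model author, and the prompt format is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "RichardErkhov/Locutusque_-_OpenCerebrum-1.0-7b-SFT-8bits"  # this 8-bit quant

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bitsandbytes 8-bit weights
    device_map="auto",  # requires the accelerate package
)

prompt = "Explain the difference between ionic and covalent bonds."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```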
| {} | RichardErkhov/Locutusque_-_OpenCerebrum-1.0-7b-SFT-8bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-03T16:52:46+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
OpenCerebrum-1.0-7b-SFT - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
language:
- en
license: apache-2.0
tags:
- open-source
- code
- math
- chemistry
- biology
- text-generation
- question-answering
datasets:
- Open-Orca/SlimOrca
- glaiveai/glaive-code-assistant
- camel-ai/physics
- camel-ai/math
- camel-ai/chemistry
- camel-ai/biology
- WizardLM/WizardLM_evol_instruct_V2_196k
- microsoft/orca-math-word-problems-200k
- grimulkan/theory-of-mind
- Vezora/Tested-22k-Python-Alpaca
- m-a-p/Code-Feedback
- Locutusque/arc-cot
- jondurbin/airoboros-2.1
- WizardLM/WizardLM_evol_instruct_70k
pipeline_tag: text-generation
---
# OpenCerebrum-1.0-7B-SFT
OpenCerebrum-1.0-7B-SFT is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of AetherResearch's proprietary Cerebrum model.
The model was fine-tuned on approximately 1.2 million examples across 14 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.
## Model Details
- Base Model: alpindale/Mistral-7B-v0.2-hf
- Parameters: 7 billion
- Fine-Tuning Dataset Size: ~1,200,000 examples
- Fine-Tuning Data: Amalgamation of 14 public datasets
- Language: English
- License: Apache 2.0
## Intended Use
OpenCerebrum-1.0-7B-SFT is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.
However, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.
## Limitations and Biases
- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.
- With 1.2 million training examples, the fine-tuning data is still limited compared to the proprietary Cerebrum data.
- As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.
## Training Details
The model was fine-tuned on the 14 datasets listed in the Datasets section, totaling approximately 1.2 million examples. Default training hyperparameters were used. In the future, the fine-tuning dataset may be condensed to more closely match the 5,000 example dataset reputedly used for the original Cerebrum model.
| [
"# OpenCerebrum-1.0-7B-SFT\n\nOpenCerebrum-1.0-7B-SFT is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of AetherResearch's proprietary Cerebrum model. \n\nThe model was fine-tuned on approximately 1.2 million examples across 14 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.",
"## Model Details\n\n- Base Model: alpindale/Mistral-7B-v0.2-hf\n- Parameters: 7 billion \n- Fine-Tuning Dataset Size: ~1,200,000 examples\n- Fine-Tuning Data: Amalgamation of 14 public datasets\n- Language: English\n- License: Apache 2.0",
"## Intended Use\n\nOpenCerebrum-1.0-7B-SFT is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.\n\nHowever, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.",
"## Limitations and Biases\n\n- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.\n- With 1.2 million training examples, the fine-tuning data is still limited compared to the proprietary Cerebrum data.\n- As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.",
"## Training Details\n\nThe model was fine-tuned on the 14 datasets listed in the Datasets section, totaling approximately 1.2 million examples. Default training hyperparameters were used. In the future, the fine-tuning dataset may be condensed to more closely match the 5,000 example dataset reputedly used for the original Cerebrum model."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# OpenCerebrum-1.0-7B-SFT\n\nOpenCerebrum-1.0-7B-SFT is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of AetherResearch's proprietary Cerebrum model. \n\nThe model was fine-tuned on approximately 1.2 million examples across 14 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.",
"## Model Details\n\n- Base Model: alpindale/Mistral-7B-v0.2-hf\n- Parameters: 7 billion \n- Fine-Tuning Dataset Size: ~1,200,000 examples\n- Fine-Tuning Data: Amalgamation of 14 public datasets\n- Language: English\n- License: Apache 2.0",
"## Intended Use\n\nOpenCerebrum-1.0-7B-SFT is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.\n\nHowever, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.",
"## Limitations and Biases\n\n- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.\n- With 1.2 million training examples, the fine-tuning data is still limited compared to the proprietary Cerebrum data.\n- As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.",
"## Training Details\n\nThe model was fine-tuned on the 14 datasets listed in the Datasets section, totaling approximately 1.2 million examples. Default training hyperparameters were used. In the future, the fine-tuning dataset may be condensed to more closely match the 5,000 example dataset reputedly used for the original Cerebrum model."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5242
- F1 Score: 0.8750
- Accuracy: 0.875
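
To give a sense of how a PEFT adapter like this one is normally applied on top of its base model, here is a minimal, hypothetical sketch. The sequence-classification head, the two-label assumption, and the example input are illustrative only and are not confirmed by this card.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L8_f"

# Assumptions: binary classification head (num_labels=2); depending on how the
# base model is published, trust_remote_code=True may also be required.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
pred = model(**inputs).logits.argmax(dim=-1)
print(pred)
```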
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3964 | 9.52 | 200 | 0.3257 | 0.8488 | 0.8506 |
| 0.2775 | 19.05 | 400 | 0.2967 | 0.8563 | 0.8567 |
| 0.2431 | 28.57 | 600 | 0.2805 | 0.8687 | 0.8689 |
| 0.2149 | 38.1 | 800 | 0.2896 | 0.8870 | 0.8872 |
| 0.1892 | 47.62 | 1000 | 0.2798 | 0.8811 | 0.8811 |
| 0.1722 | 57.14 | 1200 | 0.3207 | 0.8840 | 0.8841 |
| 0.1489 | 66.67 | 1400 | 0.3516 | 0.8780 | 0.8780 |
| 0.1417 | 76.19 | 1600 | 0.3447 | 0.8748 | 0.875 |
| 0.1275 | 85.71 | 1800 | 0.3557 | 0.8872 | 0.8872 |
| 0.1144 | 95.24 | 2000 | 0.3438 | 0.8841 | 0.8841 |
| 0.1022 | 104.76 | 2200 | 0.3620 | 0.8901 | 0.8902 |
| 0.0968 | 114.29 | 2400 | 0.3779 | 0.8963 | 0.8963 |
| 0.091 | 123.81 | 2600 | 0.3865 | 0.8871 | 0.8872 |
| 0.0798 | 133.33 | 2800 | 0.3939 | 0.8750 | 0.875 |
| 0.0737 | 142.86 | 3000 | 0.4687 | 0.8777 | 0.8780 |
| 0.0698 | 152.38 | 3200 | 0.4192 | 0.8963 | 0.8963 |
| 0.0687 | 161.9 | 3400 | 0.4379 | 0.8901 | 0.8902 |
| 0.0614 | 171.43 | 3600 | 0.4795 | 0.8778 | 0.8780 |
| 0.0619 | 180.95 | 3800 | 0.4757 | 0.8869 | 0.8872 |
| 0.0537 | 190.48 | 4000 | 0.4562 | 0.8963 | 0.8963 |
| 0.0545 | 200.0 | 4200 | 0.4989 | 0.8778 | 0.8780 |
| 0.0507 | 209.52 | 4400 | 0.4625 | 0.8841 | 0.8841 |
| 0.0491 | 219.05 | 4600 | 0.5119 | 0.8839 | 0.8841 |
| 0.0501 | 228.57 | 4800 | 0.4785 | 0.8962 | 0.8963 |
| 0.0445 | 238.1 | 5000 | 0.5140 | 0.8778 | 0.8780 |
| 0.0429 | 247.62 | 5200 | 0.4812 | 0.8872 | 0.8872 |
| 0.045 | 257.14 | 5400 | 0.5032 | 0.8902 | 0.8902 |
| 0.0382 | 266.67 | 5600 | 0.5139 | 0.8901 | 0.8902 |
| 0.0409 | 276.19 | 5800 | 0.5122 | 0.8932 | 0.8933 |
| 0.0375 | 285.71 | 6000 | 0.5461 | 0.8777 | 0.8780 |
| 0.0336 | 295.24 | 6200 | 0.5440 | 0.8869 | 0.8872 |
| 0.0347 | 304.76 | 6400 | 0.5410 | 0.8901 | 0.8902 |
| 0.0312 | 314.29 | 6600 | 0.5536 | 0.8901 | 0.8902 |
| 0.032 | 323.81 | 6800 | 0.5701 | 0.8931 | 0.8933 |
| 0.035 | 333.33 | 7000 | 0.5255 | 0.8870 | 0.8872 |
| 0.0296 | 342.86 | 7200 | 0.6222 | 0.8807 | 0.8811 |
| 0.0323 | 352.38 | 7400 | 0.5536 | 0.8870 | 0.8872 |
| 0.0309 | 361.9 | 7600 | 0.5629 | 0.8869 | 0.8872 |
| 0.0305 | 371.43 | 7800 | 0.5216 | 0.8871 | 0.8872 |
| 0.0264 | 380.95 | 8000 | 0.6018 | 0.8775 | 0.8780 |
| 0.0278 | 390.48 | 8200 | 0.5967 | 0.8808 | 0.8811 |
| 0.0268 | 400.0 | 8400 | 0.5701 | 0.8901 | 0.8902 |
| 0.0284 | 409.52 | 8600 | 0.5754 | 0.8808 | 0.8811 |
| 0.0276 | 419.05 | 8800 | 0.5478 | 0.8902 | 0.8902 |
| 0.027 | 428.57 | 9000 | 0.5620 | 0.8870 | 0.8872 |
| 0.0232 | 438.1 | 9200 | 0.5838 | 0.8900 | 0.8902 |
| 0.0298 | 447.62 | 9400 | 0.5757 | 0.8840 | 0.8841 |
| 0.0271 | 457.14 | 9600 | 0.5786 | 0.8900 | 0.8902 |
| 0.0242 | 466.67 | 9800 | 0.5620 | 0.8870 | 0.8872 |
| 0.0259 | 476.19 | 10000 | 0.5620 | 0.8870 | 0.8872 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:53:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_2-seqsight\_65536\_512\_47M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5242
* F1 Score: 0.8750
* Accuracy: 0.875
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
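
In the absence of an official example, a generic way to try a text-classification checkpoint from the Hub is the 🤗 Transformers `pipeline`; the sketch below is only an assumption about how this model is meant to be called, and its label names and preprocessing requirements are not documented here.

```python
from transformers import pipeline

# Hypothetical usage sketch; label names and any required preprocessing
# are not documented in this model card.
classifier = pipeline("text-classification", model="dabagyan/bert-sarcasm-model")
print(classifier("Oh great, another Monday. Just what I needed."))
```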
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | dabagyan/bert-sarcasm-model | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:54:08+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** dpriver
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
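
Because this repository is tagged as a GGUF export, one common way to run it locally is with `llama-cpp-python`. The sketch below is a hedged illustration: the GGUF filename is an assumption and should be checked against the repository's file listing.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumption: the repo contains a GGUF file; the exact filename is hypothetical.
gguf_path = hf_hub_download(repo_id="dpriver/model", filename="model-unsloth.Q8_0.gguf")

llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Summarize what Unsloth does in one sentence.", max_tokens=96)
print(result["choices"][0]["text"])
```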
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | dpriver/model | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:54:19+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: dpriver
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: dpriver\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: dpriver\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5883
- F1 Score: 0.8628
- Accuracy: 0.8628
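
The F1 and accuracy numbers reported in these tables are typically produced by a `compute_metrics` callback passed to the trainer; the following is a hedged reconstruction with scikit-learn, where the macro-averaging choice for F1 is an assumption rather than something stated in the card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        # Assumption: macro-averaged F1; the averaging mode is not stated in the card.
        "f1": f1_score(labels, predictions, average="macro"),
        "accuracy": accuracy_score(labels, predictions),
    }
```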
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3645 | 9.52 | 200 | 0.2904 | 0.8685 | 0.8689 |
| 0.2332 | 19.05 | 400 | 0.2734 | 0.8841 | 0.8841 |
| 0.1873 | 28.57 | 600 | 0.3260 | 0.8684 | 0.8689 |
| 0.1497 | 38.1 | 800 | 0.3228 | 0.8902 | 0.8902 |
| 0.1189 | 47.62 | 1000 | 0.3362 | 0.8902 | 0.8902 |
| 0.0994 | 57.14 | 1200 | 0.4017 | 0.8841 | 0.8841 |
| 0.0742 | 66.67 | 1400 | 0.4739 | 0.8810 | 0.8811 |
| 0.0656 | 76.19 | 1600 | 0.4869 | 0.8718 | 0.8720 |
| 0.0532 | 85.71 | 1800 | 0.4801 | 0.8841 | 0.8841 |
| 0.0448 | 95.24 | 2000 | 0.4620 | 0.8902 | 0.8902 |
| 0.0403 | 104.76 | 2200 | 0.4691 | 0.8963 | 0.8963 |
| 0.0328 | 114.29 | 2400 | 0.5741 | 0.8841 | 0.8841 |
| 0.0323 | 123.81 | 2600 | 0.5977 | 0.8717 | 0.8720 |
| 0.0318 | 133.33 | 2800 | 0.5713 | 0.8653 | 0.8659 |
| 0.025 | 142.86 | 3000 | 0.5882 | 0.8902 | 0.8902 |
| 0.0226 | 152.38 | 3200 | 0.5815 | 0.8871 | 0.8872 |
| 0.0244 | 161.9 | 3400 | 0.6150 | 0.8869 | 0.8872 |
| 0.0217 | 171.43 | 3600 | 0.5968 | 0.8748 | 0.875 |
| 0.0176 | 180.95 | 3800 | 0.6338 | 0.8841 | 0.8841 |
| 0.0149 | 190.48 | 4000 | 0.6048 | 0.8810 | 0.8811 |
| 0.0176 | 200.0 | 4200 | 0.6294 | 0.8810 | 0.8811 |
| 0.0145 | 209.52 | 4400 | 0.6139 | 0.8811 | 0.8811 |
| 0.0126 | 219.05 | 4600 | 0.6751 | 0.8840 | 0.8841 |
| 0.0142 | 228.57 | 4800 | 0.6638 | 0.8687 | 0.8689 |
| 0.0123 | 238.1 | 5000 | 0.6573 | 0.8719 | 0.8720 |
| 0.012 | 247.62 | 5200 | 0.5845 | 0.8871 | 0.8872 |
| 0.0129 | 257.14 | 5400 | 0.6561 | 0.8933 | 0.8933 |
| 0.0113 | 266.67 | 5600 | 0.7041 | 0.8686 | 0.8689 |
| 0.0094 | 276.19 | 5800 | 0.7106 | 0.8809 | 0.8811 |
| 0.0125 | 285.71 | 6000 | 0.6203 | 0.8870 | 0.8872 |
| 0.0104 | 295.24 | 6200 | 0.6492 | 0.8902 | 0.8902 |
| 0.0094 | 304.76 | 6400 | 0.6602 | 0.8749 | 0.875 |
| 0.0075 | 314.29 | 6600 | 0.6598 | 0.8902 | 0.8902 |
| 0.0089 | 323.81 | 6800 | 0.7270 | 0.8871 | 0.8872 |
| 0.0085 | 333.33 | 7000 | 0.6682 | 0.8811 | 0.8811 |
| 0.0058 | 342.86 | 7200 | 0.7529 | 0.8932 | 0.8933 |
| 0.007 | 352.38 | 7400 | 0.7259 | 0.8871 | 0.8872 |
| 0.0066 | 361.9 | 7600 | 0.7356 | 0.8841 | 0.8841 |
| 0.0067 | 371.43 | 7800 | 0.7154 | 0.8810 | 0.8811 |
| 0.0066 | 380.95 | 8000 | 0.7417 | 0.8902 | 0.8902 |
| 0.0076 | 390.48 | 8200 | 0.7257 | 0.8841 | 0.8841 |
| 0.0066 | 400.0 | 8400 | 0.7069 | 0.8810 | 0.8811 |
| 0.0059 | 409.52 | 8600 | 0.7168 | 0.8872 | 0.8872 |
| 0.0042 | 419.05 | 8800 | 0.7106 | 0.8810 | 0.8811 |
| 0.0065 | 428.57 | 9000 | 0.7177 | 0.8841 | 0.8841 |
| 0.0038 | 438.1 | 9200 | 0.7353 | 0.8841 | 0.8841 |
| 0.0055 | 447.62 | 9400 | 0.7442 | 0.8871 | 0.8872 |
| 0.0052 | 457.14 | 9600 | 0.7419 | 0.8841 | 0.8841 |
| 0.0041 | 466.67 | 9800 | 0.7421 | 0.8841 | 0.8841 |
| 0.0051 | 476.19 | 10000 | 0.7426 | 0.8841 | 0.8841 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_2-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:54:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_2-seqsight\_65536\_512\_47M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5883
* F1 Score: 0.8628
* Accuracy: 0.8628
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** xsa-dev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | xsa-dev/hugs_llama3_technique_ft_8bit_Q8_0 | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:55:21+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: xsa-dev
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4893
- F1 Score: 0.7920
- Accuracy: 0.7911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9712 | 0.7 | 200 | 0.9280 | 0.4284 | 0.5587 |
| 0.9193 | 1.4 | 400 | 0.8911 | 0.5372 | 0.5851 |
| 0.7622 | 2.1 | 600 | 0.6344 | 0.7222 | 0.7212 |
| 0.6373 | 2.8 | 800 | 0.6134 | 0.7349 | 0.7337 |
| 0.6223 | 3.5 | 1000 | 0.5896 | 0.7398 | 0.7387 |
| 0.6084 | 4.2 | 1200 | 0.5706 | 0.7555 | 0.7549 |
| 0.5971 | 4.9 | 1400 | 0.5575 | 0.7614 | 0.7611 |
| 0.5886 | 5.59 | 1600 | 0.5583 | 0.7602 | 0.7593 |
| 0.5806 | 6.29 | 1800 | 0.5558 | 0.7605 | 0.7593 |
| 0.5732 | 6.99 | 2000 | 0.5622 | 0.7574 | 0.7562 |
| 0.5696 | 7.69 | 2200 | 0.5306 | 0.7745 | 0.7742 |
| 0.5613 | 8.39 | 2400 | 0.5343 | 0.7704 | 0.7696 |
| 0.5534 | 9.09 | 2600 | 0.5402 | 0.7668 | 0.7659 |
| 0.5546 | 9.79 | 2800 | 0.5284 | 0.7769 | 0.7762 |
| 0.5559 | 10.49 | 3000 | 0.5319 | 0.7733 | 0.7722 |
| 0.5446 | 11.19 | 3200 | 0.5369 | 0.7696 | 0.7685 |
| 0.5478 | 11.89 | 3400 | 0.5145 | 0.7813 | 0.7806 |
| 0.5382 | 12.59 | 3600 | 0.5206 | 0.7788 | 0.7779 |
| 0.5409 | 13.29 | 3800 | 0.5212 | 0.7783 | 0.7773 |
| 0.5424 | 13.99 | 4000 | 0.5214 | 0.7782 | 0.7771 |
| 0.5325 | 14.69 | 4200 | 0.5168 | 0.7775 | 0.7764 |
| 0.5322 | 15.38 | 4400 | 0.5240 | 0.7739 | 0.7727 |
| 0.5249 | 16.08 | 4600 | 0.5278 | 0.7767 | 0.7755 |
| 0.5281 | 16.78 | 4800 | 0.5086 | 0.7844 | 0.7834 |
| 0.5245 | 17.48 | 5000 | 0.5128 | 0.7830 | 0.7819 |
| 0.5203 | 18.18 | 5200 | 0.4971 | 0.7939 | 0.7933 |
| 0.5215 | 18.88 | 5400 | 0.5132 | 0.7826 | 0.7815 |
| 0.5247 | 19.58 | 5600 | 0.4929 | 0.7940 | 0.7933 |
| 0.5168 | 20.28 | 5800 | 0.4974 | 0.7900 | 0.7891 |
| 0.5152 | 20.98 | 6000 | 0.4947 | 0.7921 | 0.7913 |
| 0.5202 | 21.68 | 6200 | 0.5053 | 0.7878 | 0.7867 |
| 0.5135 | 22.38 | 6400 | 0.5030 | 0.7863 | 0.7852 |
| 0.5079 | 23.08 | 6600 | 0.4934 | 0.7917 | 0.7909 |
| 0.5158 | 23.78 | 6800 | 0.4967 | 0.7886 | 0.7876 |
| 0.5136 | 24.48 | 7000 | 0.5089 | 0.7835 | 0.7823 |
| 0.5109 | 25.17 | 7200 | 0.4898 | 0.7951 | 0.7942 |
| 0.511 | 25.87 | 7400 | 0.4974 | 0.7898 | 0.7887 |
| 0.5069 | 26.57 | 7600 | 0.4992 | 0.7900 | 0.7889 |
| 0.5063 | 27.27 | 7800 | 0.4970 | 0.7915 | 0.7904 |
| 0.5054 | 27.97 | 8000 | 0.4921 | 0.7926 | 0.7915 |
| 0.5104 | 28.67 | 8200 | 0.5018 | 0.7887 | 0.7876 |
| 0.5086 | 29.37 | 8400 | 0.4981 | 0.7902 | 0.7891 |
| 0.5038 | 30.07 | 8600 | 0.4852 | 0.7968 | 0.7959 |
| 0.503 | 30.77 | 8800 | 0.4941 | 0.7920 | 0.7909 |
| 0.5017 | 31.47 | 9000 | 0.4915 | 0.7928 | 0.7918 |
| 0.5068 | 32.17 | 9200 | 0.4919 | 0.7930 | 0.7920 |
| 0.5041 | 32.87 | 9400 | 0.4911 | 0.7937 | 0.7926 |
| 0.5064 | 33.57 | 9600 | 0.4943 | 0.7909 | 0.7898 |
| 0.5021 | 34.27 | 9800 | 0.4911 | 0.7930 | 0.7920 |
| 0.5043 | 34.97 | 10000 | 0.4919 | 0.7926 | 0.7915 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:56:36+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_splice\_reconstructed-seqsight\_65536\_512\_47M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4893
* F1 Score: 0.7920
* Accuracy: 0.7911
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/kkngu4g | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:56:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3998
- F1 Score: 0.8368
- Accuracy: 0.8360
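This repository ships a PEFT adapter rather than full model weights, so it has to be attached to the base model before use. A minimal sketch is shown below; the sequence-classification head, the number of labels, and any need for `trust_remote_code` are assumptions, since the card does not state them.

```python
# Hedged sketch of loading the adapter on top of the base model with peft.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=3 is an assumption for the splice-site task (donor / acceptor / neither);
# add trust_remote_code=True if the base model ships custom modeling code.
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```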
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
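For reference, the list above maps roughly onto the following 🤗 `TrainingArguments`; this is a reconstruction, not the authors' training script, and `output_dir` plus any logging or evaluation cadence are placeholders the card does not record.

```python
# Hedged reconstruction of the listed configuration; only the values shown above
# are taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gue_splice_seqsight_l8",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```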
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9576 | 0.7 | 200 | 0.8977 | 0.5076 | 0.5756 |
| 0.7323 | 1.4 | 400 | 0.6126 | 0.7343 | 0.7332 |
| 0.6088 | 2.1 | 600 | 0.5636 | 0.7585 | 0.7576 |
| 0.577 | 2.8 | 800 | 0.5702 | 0.7504 | 0.7495 |
| 0.5636 | 3.5 | 1000 | 0.5334 | 0.7703 | 0.7692 |
| 0.5511 | 4.2 | 1200 | 0.5112 | 0.7841 | 0.7834 |
| 0.5352 | 4.9 | 1400 | 0.5002 | 0.7869 | 0.7863 |
| 0.5245 | 5.59 | 1600 | 0.5010 | 0.7891 | 0.7883 |
| 0.5129 | 6.29 | 1800 | 0.4943 | 0.7872 | 0.7861 |
| 0.4985 | 6.99 | 2000 | 0.4938 | 0.7917 | 0.7907 |
| 0.4944 | 7.69 | 2200 | 0.4628 | 0.8103 | 0.8097 |
| 0.4825 | 8.39 | 2400 | 0.4772 | 0.8004 | 0.7994 |
| 0.4738 | 9.09 | 2600 | 0.4807 | 0.7942 | 0.7929 |
| 0.4711 | 9.79 | 2800 | 0.4627 | 0.8072 | 0.8062 |
| 0.4678 | 10.49 | 3000 | 0.4574 | 0.8103 | 0.8093 |
| 0.4561 | 11.19 | 3200 | 0.4477 | 0.8120 | 0.8110 |
| 0.4569 | 11.89 | 3400 | 0.4407 | 0.8139 | 0.8130 |
| 0.4483 | 12.59 | 3600 | 0.4412 | 0.8176 | 0.8167 |
| 0.4489 | 13.29 | 3800 | 0.4381 | 0.8164 | 0.8154 |
| 0.443 | 13.99 | 4000 | 0.4467 | 0.8167 | 0.8157 |
| 0.4326 | 14.69 | 4200 | 0.4359 | 0.8202 | 0.8192 |
| 0.4351 | 15.38 | 4400 | 0.4307 | 0.8229 | 0.8220 |
| 0.4233 | 16.08 | 4600 | 0.4539 | 0.8141 | 0.8132 |
| 0.4261 | 16.78 | 4800 | 0.4231 | 0.8294 | 0.8284 |
| 0.4168 | 17.48 | 5000 | 0.4412 | 0.8210 | 0.8198 |
| 0.413 | 18.18 | 5200 | 0.4127 | 0.8360 | 0.8352 |
| 0.413 | 18.88 | 5400 | 0.4177 | 0.8339 | 0.8330 |
| 0.413 | 19.58 | 5600 | 0.4017 | 0.8389 | 0.8382 |
| 0.4072 | 20.28 | 5800 | 0.4075 | 0.8395 | 0.8387 |
| 0.4077 | 20.98 | 6000 | 0.4081 | 0.8363 | 0.8354 |
| 0.4022 | 21.68 | 6200 | 0.4271 | 0.8262 | 0.8253 |
| 0.4001 | 22.38 | 6400 | 0.4101 | 0.8347 | 0.8338 |
| 0.3911 | 23.08 | 6600 | 0.4135 | 0.8324 | 0.8314 |
| 0.3968 | 23.78 | 6800 | 0.4060 | 0.8356 | 0.8347 |
| 0.3946 | 24.48 | 7000 | 0.4186 | 0.8299 | 0.8290 |
| 0.396 | 25.17 | 7200 | 0.3987 | 0.8391 | 0.8382 |
| 0.3933 | 25.87 | 7400 | 0.4123 | 0.8334 | 0.8325 |
| 0.3892 | 26.57 | 7600 | 0.4239 | 0.8273 | 0.8264 |
| 0.3843 | 27.27 | 7800 | 0.4167 | 0.8312 | 0.8303 |
| 0.3886 | 27.97 | 8000 | 0.4022 | 0.8393 | 0.8384 |
| 0.3912 | 28.67 | 8200 | 0.4114 | 0.8330 | 0.8321 |
| 0.385 | 29.37 | 8400 | 0.4063 | 0.8350 | 0.8341 |
| 0.3845 | 30.07 | 8600 | 0.3950 | 0.8397 | 0.8389 |
| 0.3839 | 30.77 | 8800 | 0.4045 | 0.8363 | 0.8354 |
| 0.38 | 31.47 | 9000 | 0.3989 | 0.8393 | 0.8384 |
| 0.3848 | 32.17 | 9200 | 0.4036 | 0.8367 | 0.8358 |
| 0.3758 | 32.87 | 9400 | 0.4023 | 0.8371 | 0.8363 |
| 0.3849 | 33.57 | 9600 | 0.4066 | 0.8358 | 0.8349 |
| 0.3781 | 34.27 | 9800 | 0.4034 | 0.8358 | 0.8349 |
| 0.3777 | 34.97 | 10000 | 0.4038 | 0.8363 | 0.8354 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:56:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_splice\_reconstructed-seqsight\_65536\_512\_47M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3998
* F1 Score: 0.8368
* Accuracy: 0.8360
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3508
- F1 Score: 0.8581
- Accuracy: 0.8577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9474 | 0.7 | 200 | 0.8440 | 0.5497 | 0.6094 |
| 0.6437 | 1.4 | 400 | 0.5782 | 0.7461 | 0.7453 |
| 0.5627 | 2.1 | 600 | 0.5120 | 0.7830 | 0.7821 |
| 0.5223 | 2.8 | 800 | 0.5177 | 0.7806 | 0.7797 |
| 0.4979 | 3.5 | 1000 | 0.4653 | 0.8050 | 0.8045 |
| 0.4893 | 4.2 | 1200 | 0.4635 | 0.8064 | 0.8053 |
| 0.469 | 4.9 | 1400 | 0.4467 | 0.8154 | 0.8146 |
| 0.4555 | 5.59 | 1600 | 0.4466 | 0.8149 | 0.8143 |
| 0.4423 | 6.29 | 1800 | 0.4464 | 0.8155 | 0.8146 |
| 0.4283 | 6.99 | 2000 | 0.4328 | 0.8254 | 0.8244 |
| 0.4205 | 7.69 | 2200 | 0.4151 | 0.8286 | 0.8279 |
| 0.4142 | 8.39 | 2400 | 0.4213 | 0.8278 | 0.8270 |
| 0.402 | 9.09 | 2600 | 0.4412 | 0.8212 | 0.8200 |
| 0.3957 | 9.79 | 2800 | 0.4212 | 0.8266 | 0.8257 |
| 0.3939 | 10.49 | 3000 | 0.3985 | 0.8401 | 0.8393 |
| 0.3785 | 11.19 | 3200 | 0.4036 | 0.8386 | 0.8378 |
| 0.3829 | 11.89 | 3400 | 0.3994 | 0.8369 | 0.8360 |
| 0.3685 | 12.59 | 3600 | 0.3840 | 0.8456 | 0.8450 |
| 0.371 | 13.29 | 3800 | 0.3734 | 0.8487 | 0.8481 |
| 0.366 | 13.99 | 4000 | 0.3977 | 0.8425 | 0.8417 |
| 0.3548 | 14.69 | 4200 | 0.3847 | 0.8456 | 0.8448 |
| 0.3572 | 15.38 | 4400 | 0.3818 | 0.8466 | 0.8459 |
| 0.3445 | 16.08 | 4600 | 0.3968 | 0.8436 | 0.8428 |
| 0.3461 | 16.78 | 4800 | 0.3712 | 0.8518 | 0.8512 |
| 0.3374 | 17.48 | 5000 | 0.3832 | 0.8489 | 0.8481 |
| 0.3325 | 18.18 | 5200 | 0.3729 | 0.8516 | 0.8509 |
| 0.3346 | 18.88 | 5400 | 0.3818 | 0.8462 | 0.8455 |
| 0.3334 | 19.58 | 5600 | 0.3550 | 0.8610 | 0.8606 |
| 0.3331 | 20.28 | 5800 | 0.3638 | 0.8579 | 0.8573 |
| 0.3291 | 20.98 | 6000 | 0.3564 | 0.8581 | 0.8575 |
| 0.3208 | 21.68 | 6200 | 0.3759 | 0.8521 | 0.8514 |
| 0.3228 | 22.38 | 6400 | 0.3707 | 0.8545 | 0.8538 |
| 0.315 | 23.08 | 6600 | 0.3818 | 0.8541 | 0.8534 |
| 0.3188 | 23.78 | 6800 | 0.3773 | 0.8536 | 0.8529 |
| 0.3145 | 24.48 | 7000 | 0.3810 | 0.8534 | 0.8527 |
| 0.3161 | 25.17 | 7200 | 0.3666 | 0.8545 | 0.8538 |
| 0.3117 | 25.87 | 7400 | 0.3760 | 0.8556 | 0.8549 |
| 0.3084 | 26.57 | 7600 | 0.3858 | 0.8480 | 0.8472 |
| 0.3054 | 27.27 | 7800 | 0.3875 | 0.8497 | 0.8490 |
| 0.308 | 27.97 | 8000 | 0.3650 | 0.8593 | 0.8586 |
| 0.3059 | 28.67 | 8200 | 0.3730 | 0.8547 | 0.8540 |
| 0.3053 | 29.37 | 8400 | 0.3672 | 0.8552 | 0.8544 |
| 0.2975 | 30.07 | 8600 | 0.3638 | 0.8599 | 0.8593 |
| 0.2965 | 30.77 | 8800 | 0.3696 | 0.8558 | 0.8551 |
| 0.297 | 31.47 | 9000 | 0.3701 | 0.8545 | 0.8538 |
| 0.3035 | 32.17 | 9200 | 0.3651 | 0.8580 | 0.8573 |
| 0.2948 | 32.87 | 9400 | 0.3681 | 0.8553 | 0.8547 |
| 0.2976 | 33.57 | 9600 | 0.3725 | 0.8560 | 0.8553 |
| 0.2947 | 34.27 | 9800 | 0.3682 | 0.8569 | 0.8562 |
| 0.2938 | 34.97 | 10000 | 0.3698 | 0.8564 | 0.8558 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:57:12+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_splice\_reconstructed-seqsight\_65536\_512\_47M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3508
* F1 Score: 0.8581
* Accuracy: 0.8577
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3808
- F1 Score: 0.8262
- Accuracy: 0.827
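The evaluation numbers above are an F1 score and accuracy on the held-out set. The card does not say how they were computed; a typical `compute_metrics` callback for a Trainer-based run would look roughly like the sketch below, where the macro averaging choice is an assumption.

```python
# Hedged sketch of a Trainer-style metrics callback for this classification task.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode is assumed
        "accuracy": accuracy_score(labels, preds),
    }
```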
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5459 | 0.79 | 200 | 0.4805 | 0.7586 | 0.759 |
| 0.4886 | 1.58 | 400 | 0.4690 | 0.7665 | 0.767 |
| 0.4866 | 2.37 | 600 | 0.4709 | 0.7625 | 0.764 |
| 0.4748 | 3.16 | 800 | 0.4670 | 0.7649 | 0.765 |
| 0.4742 | 3.95 | 1000 | 0.4647 | 0.7630 | 0.763 |
| 0.4731 | 4.74 | 1200 | 0.4678 | 0.7671 | 0.767 |
| 0.4702 | 5.53 | 1400 | 0.4616 | 0.7659 | 0.766 |
| 0.4647 | 6.32 | 1600 | 0.4632 | 0.7631 | 0.763 |
| 0.468 | 7.11 | 1800 | 0.4685 | 0.76 | 0.76 |
| 0.4668 | 7.91 | 2000 | 0.4611 | 0.7641 | 0.764 |
| 0.4625 | 8.7 | 2200 | 0.4625 | 0.7641 | 0.764 |
| 0.4594 | 9.49 | 2400 | 0.4583 | 0.7699 | 0.77 |
| 0.4608 | 10.28 | 2600 | 0.4690 | 0.7676 | 0.768 |
| 0.4585 | 11.07 | 2800 | 0.4645 | 0.7637 | 0.764 |
| 0.4596 | 11.86 | 3000 | 0.4615 | 0.7680 | 0.768 |
| 0.4593 | 12.65 | 3200 | 0.4650 | 0.7727 | 0.773 |
| 0.4553 | 13.44 | 3400 | 0.4540 | 0.7750 | 0.775 |
| 0.4536 | 14.23 | 3600 | 0.4534 | 0.7780 | 0.778 |
| 0.4539 | 15.02 | 3800 | 0.4592 | 0.7710 | 0.771 |
| 0.4573 | 15.81 | 4000 | 0.4610 | 0.7718 | 0.772 |
| 0.4499 | 16.6 | 4200 | 0.4534 | 0.7800 | 0.78 |
| 0.4556 | 17.39 | 4400 | 0.4605 | 0.7698 | 0.77 |
| 0.4543 | 18.18 | 4600 | 0.4560 | 0.7720 | 0.772 |
| 0.4509 | 18.97 | 4800 | 0.4665 | 0.7694 | 0.77 |
| 0.4584 | 19.76 | 5000 | 0.4503 | 0.7761 | 0.776 |
| 0.4505 | 20.55 | 5200 | 0.4473 | 0.7800 | 0.78 |
| 0.4502 | 21.34 | 5400 | 0.4520 | 0.774 | 0.774 |
| 0.4469 | 22.13 | 5600 | 0.4550 | 0.7730 | 0.773 |
| 0.454 | 22.92 | 5800 | 0.4509 | 0.7771 | 0.777 |
| 0.4473 | 23.72 | 6000 | 0.4558 | 0.7749 | 0.775 |
| 0.4475 | 24.51 | 6200 | 0.4493 | 0.7771 | 0.777 |
| 0.4537 | 25.3 | 6400 | 0.4526 | 0.7740 | 0.774 |
| 0.4457 | 26.09 | 6600 | 0.4504 | 0.7750 | 0.775 |
| 0.446 | 26.88 | 6800 | 0.4531 | 0.7770 | 0.777 |
| 0.4496 | 27.67 | 7000 | 0.4484 | 0.7821 | 0.782 |
| 0.4482 | 28.46 | 7200 | 0.4471 | 0.7790 | 0.779 |
| 0.4476 | 29.25 | 7400 | 0.4488 | 0.7810 | 0.781 |
| 0.4499 | 30.04 | 7600 | 0.4475 | 0.7791 | 0.779 |
| 0.4467 | 30.83 | 7800 | 0.4508 | 0.7810 | 0.781 |
| 0.4477 | 31.62 | 8000 | 0.4461 | 0.7830 | 0.783 |
| 0.4468 | 32.41 | 8200 | 0.4516 | 0.7810 | 0.781 |
| 0.4442 | 33.2 | 8400 | 0.4512 | 0.7770 | 0.777 |
| 0.4501 | 33.99 | 8600 | 0.4484 | 0.7801 | 0.78 |
| 0.4484 | 34.78 | 8800 | 0.4477 | 0.7811 | 0.781 |
| 0.443 | 35.57 | 9000 | 0.4501 | 0.7780 | 0.778 |
| 0.4458 | 36.36 | 9200 | 0.4526 | 0.7800 | 0.78 |
| 0.4468 | 37.15 | 9400 | 0.4522 | 0.7800 | 0.78 |
| 0.4451 | 37.94 | 9600 | 0.4497 | 0.7810 | 0.781 |
| 0.4482 | 38.74 | 9800 | 0.4506 | 0.7790 | 0.779 |
| 0.4461 | 39.53 | 10000 | 0.4504 | 0.7800 | 0.78 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:57:13+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_0-seqsight\_65536\_512\_47M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3808
* F1 Score: 0.8262
* Accuracy: 0.827
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- F1 Score: 0.8225
- Accuracy: 0.823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5198 | 0.79 | 200 | 0.4707 | 0.7584 | 0.759 |
| 0.4744 | 1.58 | 400 | 0.4591 | 0.7701 | 0.771 |
| 0.4702 | 2.37 | 600 | 0.4603 | 0.7721 | 0.773 |
| 0.4583 | 3.16 | 800 | 0.4562 | 0.7767 | 0.777 |
| 0.4564 | 3.95 | 1000 | 0.4510 | 0.7801 | 0.781 |
| 0.4522 | 4.74 | 1200 | 0.4505 | 0.7768 | 0.777 |
| 0.4473 | 5.53 | 1400 | 0.4625 | 0.7706 | 0.771 |
| 0.44 | 6.32 | 1600 | 0.4497 | 0.7861 | 0.786 |
| 0.4436 | 7.11 | 1800 | 0.4658 | 0.7684 | 0.769 |
| 0.4381 | 7.91 | 2000 | 0.4499 | 0.7840 | 0.784 |
| 0.4343 | 8.7 | 2200 | 0.4545 | 0.7701 | 0.77 |
| 0.4297 | 9.49 | 2400 | 0.4471 | 0.7780 | 0.778 |
| 0.431 | 10.28 | 2600 | 0.4503 | 0.7869 | 0.787 |
| 0.4261 | 11.07 | 2800 | 0.4582 | 0.7845 | 0.785 |
| 0.4254 | 11.86 | 3000 | 0.4536 | 0.7850 | 0.785 |
| 0.4246 | 12.65 | 3200 | 0.4456 | 0.7910 | 0.791 |
| 0.4169 | 13.44 | 3400 | 0.4557 | 0.7771 | 0.777 |
| 0.4165 | 14.23 | 3600 | 0.4404 | 0.7899 | 0.79 |
| 0.4176 | 15.02 | 3800 | 0.4494 | 0.7830 | 0.783 |
| 0.4182 | 15.81 | 4000 | 0.4410 | 0.7890 | 0.789 |
| 0.4096 | 16.6 | 4200 | 0.4486 | 0.7860 | 0.786 |
| 0.416 | 17.39 | 4400 | 0.4556 | 0.7859 | 0.786 |
| 0.4119 | 18.18 | 4600 | 0.4499 | 0.7941 | 0.794 |
| 0.4097 | 18.97 | 4800 | 0.4572 | 0.7839 | 0.784 |
| 0.4137 | 19.76 | 5000 | 0.4425 | 0.7930 | 0.793 |
| 0.4066 | 20.55 | 5200 | 0.4477 | 0.7930 | 0.793 |
| 0.4037 | 21.34 | 5400 | 0.4497 | 0.7860 | 0.786 |
| 0.4024 | 22.13 | 5600 | 0.4530 | 0.7881 | 0.788 |
| 0.4045 | 22.92 | 5800 | 0.4496 | 0.7910 | 0.791 |
| 0.3986 | 23.72 | 6000 | 0.4513 | 0.7881 | 0.788 |
| 0.3987 | 24.51 | 6200 | 0.4456 | 0.7900 | 0.79 |
| 0.4005 | 25.3 | 6400 | 0.4478 | 0.7940 | 0.794 |
| 0.396 | 26.09 | 6600 | 0.4442 | 0.7910 | 0.791 |
| 0.3945 | 26.88 | 6800 | 0.4540 | 0.7830 | 0.783 |
| 0.3941 | 27.67 | 7000 | 0.4504 | 0.7931 | 0.793 |
| 0.3945 | 28.46 | 7200 | 0.4486 | 0.7940 | 0.794 |
| 0.3943 | 29.25 | 7400 | 0.4523 | 0.7881 | 0.788 |
| 0.3957 | 30.04 | 7600 | 0.4509 | 0.7901 | 0.79 |
| 0.3875 | 30.83 | 7800 | 0.4540 | 0.7890 | 0.789 |
| 0.3906 | 31.62 | 8000 | 0.4476 | 0.7930 | 0.793 |
| 0.3906 | 32.41 | 8200 | 0.4518 | 0.7901 | 0.79 |
| 0.3881 | 33.2 | 8400 | 0.4533 | 0.7881 | 0.788 |
| 0.3925 | 33.99 | 8600 | 0.4573 | 0.7830 | 0.783 |
| 0.3905 | 34.78 | 8800 | 0.4494 | 0.7900 | 0.79 |
| 0.3836 | 35.57 | 9000 | 0.4532 | 0.7921 | 0.792 |
| 0.3867 | 36.36 | 9200 | 0.4591 | 0.79 | 0.79 |
| 0.3886 | 37.15 | 9400 | 0.4594 | 0.79 | 0.79 |
| 0.387 | 37.94 | 9600 | 0.4553 | 0.7891 | 0.789 |
| 0.3879 | 38.74 | 9800 | 0.4559 | 0.7901 | 0.79 |
| 0.3833 | 39.53 | 10000 | 0.4562 | 0.7891 | 0.789 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:58:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_0-seqsight\_65536\_512\_47M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3741
* F1 Score: 0.8225
* Accuracy: 0.823
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- F1 Score: 0.8292
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5291 | 0.79 | 200 | 0.4742 | 0.7544 | 0.755 |
| 0.4793 | 1.58 | 400 | 0.4651 | 0.7628 | 0.764 |
| 0.4762 | 2.37 | 600 | 0.4678 | 0.7638 | 0.766 |
| 0.4656 | 3.16 | 800 | 0.4603 | 0.7729 | 0.773 |
| 0.4649 | 3.95 | 1000 | 0.4592 | 0.7679 | 0.769 |
| 0.4622 | 4.74 | 1200 | 0.4592 | 0.7639 | 0.764 |
| 0.4576 | 5.53 | 1400 | 0.4620 | 0.7689 | 0.769 |
| 0.4532 | 6.32 | 1600 | 0.4548 | 0.7721 | 0.772 |
| 0.4555 | 7.11 | 1800 | 0.4651 | 0.7709 | 0.771 |
| 0.4516 | 7.91 | 2000 | 0.4563 | 0.7730 | 0.773 |
| 0.4486 | 8.7 | 2200 | 0.4538 | 0.7730 | 0.773 |
| 0.4447 | 9.49 | 2400 | 0.4488 | 0.7830 | 0.783 |
| 0.446 | 10.28 | 2600 | 0.4535 | 0.7709 | 0.771 |
| 0.4422 | 11.07 | 2800 | 0.4584 | 0.7686 | 0.769 |
| 0.4423 | 11.86 | 3000 | 0.4536 | 0.7790 | 0.779 |
| 0.4418 | 12.65 | 3200 | 0.4499 | 0.7790 | 0.779 |
| 0.4367 | 13.44 | 3400 | 0.4469 | 0.7871 | 0.787 |
| 0.4352 | 14.23 | 3600 | 0.4440 | 0.7920 | 0.792 |
| 0.4367 | 15.02 | 3800 | 0.4526 | 0.7750 | 0.775 |
| 0.4381 | 15.81 | 4000 | 0.4469 | 0.7791 | 0.779 |
| 0.4294 | 16.6 | 4200 | 0.4468 | 0.7890 | 0.789 |
| 0.4365 | 17.39 | 4400 | 0.4595 | 0.7657 | 0.766 |
| 0.4333 | 18.18 | 4600 | 0.4469 | 0.7831 | 0.783 |
| 0.4307 | 18.97 | 4800 | 0.4567 | 0.7698 | 0.77 |
| 0.4373 | 19.76 | 5000 | 0.4450 | 0.7821 | 0.782 |
| 0.4297 | 20.55 | 5200 | 0.4435 | 0.7890 | 0.789 |
| 0.4276 | 21.34 | 5400 | 0.4492 | 0.7790 | 0.779 |
| 0.4279 | 22.13 | 5600 | 0.4503 | 0.7801 | 0.78 |
| 0.4314 | 22.92 | 5800 | 0.4471 | 0.7821 | 0.782 |
| 0.4252 | 23.72 | 6000 | 0.4479 | 0.7771 | 0.777 |
| 0.4253 | 24.51 | 6200 | 0.4454 | 0.7820 | 0.782 |
| 0.4308 | 25.3 | 6400 | 0.4440 | 0.7880 | 0.788 |
| 0.4225 | 26.09 | 6600 | 0.4435 | 0.7860 | 0.786 |
| 0.4237 | 26.88 | 6800 | 0.4477 | 0.7791 | 0.779 |
| 0.4234 | 27.67 | 7000 | 0.4441 | 0.7890 | 0.789 |
| 0.4261 | 28.46 | 7200 | 0.4437 | 0.7859 | 0.786 |
| 0.4229 | 29.25 | 7400 | 0.4470 | 0.7861 | 0.786 |
| 0.4265 | 30.04 | 7600 | 0.4451 | 0.7850 | 0.785 |
| 0.4204 | 30.83 | 7800 | 0.4475 | 0.7850 | 0.785 |
| 0.4231 | 31.62 | 8000 | 0.4417 | 0.7849 | 0.785 |
| 0.4232 | 32.41 | 8200 | 0.4475 | 0.7811 | 0.781 |
| 0.4202 | 33.2 | 8400 | 0.4474 | 0.7821 | 0.782 |
| 0.4255 | 33.99 | 8600 | 0.4461 | 0.7830 | 0.783 |
| 0.4223 | 34.78 | 8800 | 0.4442 | 0.786 | 0.786 |
| 0.4169 | 35.57 | 9000 | 0.4456 | 0.7860 | 0.786 |
| 0.4204 | 36.36 | 9200 | 0.4498 | 0.7841 | 0.784 |
| 0.4221 | 37.15 | 9400 | 0.4491 | 0.7791 | 0.779 |
| 0.4203 | 37.94 | 9600 | 0.4464 | 0.7800 | 0.78 |
| 0.4223 | 38.74 | 9800 | 0.4472 | 0.7821 | 0.782 |
| 0.4188 | 39.53 | 10000 | 0.4469 | 0.7811 | 0.781 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_0-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:58:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_0-seqsight\_65536\_512\_47M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3793
* F1 Score: 0.8292
* Accuracy: 0.83
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3515
- F1 Score: 0.8466
- Accuracy: 0.847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5616 | 0.83 | 200 | 0.5355 | 0.7350 | 0.735 |
| 0.5085 | 1.67 | 400 | 0.5291 | 0.7389 | 0.739 |
| 0.4987 | 2.5 | 600 | 0.5290 | 0.7349 | 0.735 |
| 0.4953 | 3.33 | 800 | 0.5366 | 0.7267 | 0.728 |
| 0.4952 | 4.17 | 1000 | 0.5239 | 0.7350 | 0.735 |
| 0.4878 | 5.0 | 1200 | 0.5274 | 0.7400 | 0.74 |
| 0.487 | 5.83 | 1400 | 0.5216 | 0.7380 | 0.738 |
| 0.4877 | 6.67 | 1600 | 0.5217 | 0.7419 | 0.742 |
| 0.4852 | 7.5 | 1800 | 0.5164 | 0.7440 | 0.744 |
| 0.481 | 8.33 | 2000 | 0.5176 | 0.7400 | 0.74 |
| 0.4818 | 9.17 | 2200 | 0.5164 | 0.7498 | 0.75 |
| 0.4828 | 10.0 | 2400 | 0.5210 | 0.7397 | 0.74 |
| 0.4823 | 10.83 | 2600 | 0.5184 | 0.7395 | 0.74 |
| 0.482 | 11.67 | 2800 | 0.5175 | 0.7374 | 0.738 |
| 0.4744 | 12.5 | 3000 | 0.5166 | 0.7437 | 0.744 |
| 0.4852 | 13.33 | 3200 | 0.5082 | 0.7470 | 0.747 |
| 0.4753 | 14.17 | 3400 | 0.5097 | 0.7510 | 0.751 |
| 0.4744 | 15.0 | 3600 | 0.5173 | 0.7415 | 0.743 |
| 0.4751 | 15.83 | 3800 | 0.5110 | 0.7480 | 0.748 |
| 0.4766 | 16.67 | 4000 | 0.5141 | 0.7496 | 0.75 |
| 0.4743 | 17.5 | 4200 | 0.5116 | 0.7463 | 0.747 |
| 0.4682 | 18.33 | 4400 | 0.5178 | 0.7499 | 0.75 |
| 0.4784 | 19.17 | 4600 | 0.5126 | 0.7495 | 0.75 |
| 0.4742 | 20.0 | 4800 | 0.5082 | 0.7529 | 0.753 |
| 0.4749 | 20.83 | 5000 | 0.5113 | 0.7579 | 0.758 |
| 0.47 | 21.67 | 5200 | 0.5097 | 0.7549 | 0.755 |
| 0.4695 | 22.5 | 5400 | 0.5081 | 0.7550 | 0.755 |
| 0.4702 | 23.33 | 5600 | 0.5095 | 0.7578 | 0.758 |
| 0.4707 | 24.17 | 5800 | 0.5140 | 0.7536 | 0.754 |
| 0.4723 | 25.0 | 6000 | 0.5063 | 0.7600 | 0.76 |
| 0.4702 | 25.83 | 6200 | 0.5059 | 0.7578 | 0.758 |
| 0.4681 | 26.67 | 6400 | 0.5072 | 0.7520 | 0.752 |
| 0.471 | 27.5 | 6600 | 0.5095 | 0.7577 | 0.758 |
| 0.4688 | 28.33 | 6800 | 0.5063 | 0.7557 | 0.756 |
| 0.467 | 29.17 | 7000 | 0.5079 | 0.7567 | 0.757 |
| 0.4679 | 30.0 | 7200 | 0.5081 | 0.7567 | 0.757 |
| 0.4697 | 30.83 | 7400 | 0.5116 | 0.7521 | 0.753 |
| 0.4635 | 31.67 | 7600 | 0.5058 | 0.7619 | 0.762 |
| 0.4712 | 32.5 | 7800 | 0.5062 | 0.7608 | 0.761 |
| 0.4645 | 33.33 | 8000 | 0.5061 | 0.7587 | 0.759 |
| 0.4693 | 34.17 | 8200 | 0.5061 | 0.7567 | 0.757 |
| 0.4665 | 35.0 | 8400 | 0.5048 | 0.7619 | 0.762 |
| 0.47 | 35.83 | 8600 | 0.5040 | 0.7598 | 0.76 |
| 0.4704 | 36.67 | 8800 | 0.5040 | 0.7607 | 0.761 |
| 0.4649 | 37.5 | 9000 | 0.5090 | 0.7541 | 0.755 |
| 0.4646 | 38.33 | 9200 | 0.5054 | 0.7638 | 0.764 |
| 0.4669 | 39.17 | 9400 | 0.5052 | 0.7628 | 0.763 |
| 0.4654 | 40.0 | 9600 | 0.5058 | 0.7628 | 0.763 |
| 0.4677 | 40.83 | 9800 | 0.5044 | 0.7638 | 0.764 |
| 0.4652 | 41.67 | 10000 | 0.5048 | 0.7638 | 0.764 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:58:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_1-seqsight\_65536\_512\_47M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3515
* F1 Score: 0.8466
* Accuracy: 0.847
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |