Dataset columns:

| Column | Type |
|:--|:--|
| pipeline_tag | string (48 classes) |
| library_name | string (198 classes) |
| text | string (length 1–900k) |
| metadata | string (length 2–438k) |
| id | string (length 5–122) |
| last_modified | null |
| tags | sequence (length 1–1.84k) |
| sha | null |
| created_at | string (length 25) |
| arxiv | sequence (length 0–201) |
| languages | sequence (length 0–1.83k) |
| tags_str | string (length 17–9.34k) |
| text_str | string (length 0–389k) |
| text_lists | sequence (length 0–722) |
| processed_texts | sequence (length 1–723) |
| tokens_length | sequence (length 1–723) |
| input_texts | sequence (length 1) |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
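The card above leaves this blank, but the repository tags (`gpt2`, `text-generation`) suggest the standard pipeline pattern. A minimal sketch, assuming the checkpoint loads with stock `transformers` (the prompt and generation settings are illustrative):

```python
from transformers import pipeline

# Model id taken from this repository; everything else here is illustrative.
generator = pipeline("text-generation", model="Pruz0/Len2GPT")

output = generator(
    "Hello, how are you today?",
    max_new_tokens=50,   # illustrative cap on the continuation length
    do_sample=True,
    top_p=0.9,
)
print(output[0]["generated_text"])
```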
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
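If the fields above are to be filled in, emissions can also be measured directly while training, for example with the `codecarbon` package. This is a sketch under the assumption that `codecarbon` is installed; `train_model()` is a placeholder for the real training loop:

```python
from codecarbon import EmissionsTracker

# Measures energy use and estimates CO2eq for the code between start() and stop().
tracker = EmissionsTracker()
tracker.start()
train_model()  # placeholder: the actual training loop goes here
emissions_kg = tracker.stop()  # estimated emissions in kg CO2eq
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```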
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Pruz0/Len2GPT | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:02:18+00:00 | [
"1910.09700"
] | [] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_bedtype
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0771
- Accuracy: 0.7143
## Model description
More information needed
## Intended uses & limitations
More information needed
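While the card gives no usage details, the `text-classification` tag implies the usual pipeline interface. A minimal sketch; the example input and label names are placeholders, since the label set is not documented:

```python
from transformers import pipeline

# Model id taken from this repository; the label set is not documented in the card.
classifier = pipeline("text-classification", model="JBhug/roberta-base_bedtype")
print(classifier("King-size bed with a memory foam mattress"))
# e.g. [{'label': 'LABEL_2', 'score': 0.91}]  -- label names/scores are placeholders
```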
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
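For reference, these settings correspond roughly to the following `TrainingArguments`; this is a reconstruction from the list above, not the original training script (unlisted arguments keep their `transformers` defaults, which already match the stated Adam betas and epsilon):

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="roberta-base_bedtype",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=27,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```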
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6124 | 1.67 | 10 | 1.6887 | 0.2857 |
| 1.1698 | 3.33 | 20 | 1.6470 | 0.2571 |
| 0.9559 | 5.0 | 30 | 1.3456 | 0.5429 |
| 0.765 | 6.67 | 40 | 1.3860 | 0.4571 |
| 0.5258 | 8.33 | 50 | 1.0184 | 0.5714 |
| 0.2822 | 10.0 | 60 | 0.7994 | 0.6571 |
| 0.1576 | 11.67 | 70 | 0.8624 | 0.7429 |
| 0.0648 | 13.33 | 80 | 0.6489 | 0.8 |
| 0.0483 | 15.0 | 90 | 0.7762 | 0.7429 |
| 0.0148 | 16.67 | 100 | 0.7915 | 0.7714 |
| 0.0086 | 18.33 | 110 | 1.0589 | 0.7429 |
| 0.0058 | 20.0 | 120 | 0.8353 | 0.8 |
| 0.005 | 21.67 | 130 | 0.8960 | 0.8 |
| 0.0042 | 23.33 | 140 | 0.9228 | 0.7429 |
| 0.0036 | 25.0 | 150 | 0.9461 | 0.7429 |
| 0.0032 | 26.67 | 160 | 0.9904 | 0.7714 |
| 0.003 | 28.33 | 170 | 0.9968 | 0.7429 |
| 0.0029 | 30.0 | 180 | 0.9978 | 0.7429 |
| 0.0026 | 31.67 | 190 | 1.0043 | 0.7429 |
| 0.0029 | 33.33 | 200 | 1.0712 | 0.7143 |
| 0.0423 | 35.0 | 210 | 1.0914 | 0.7429 |
| 0.0187 | 36.67 | 220 | 0.9988 | 0.7714 |
| 0.0025 | 38.33 | 230 | 1.0863 | 0.7143 |
| 0.0023 | 40.0 | 240 | 1.1078 | 0.7143 |
| 0.0022 | 41.67 | 250 | 1.1058 | 0.7143 |
| 0.0023 | 43.33 | 260 | 1.0903 | 0.7143 |
| 0.0021 | 45.0 | 270 | 1.0833 | 0.7143 |
| 0.0021 | 46.67 | 280 | 1.0805 | 0.7143 |
| 0.0021 | 48.33 | 290 | 1.0777 | 0.7143 |
| 0.0021 | 50.0 | 300 | 1.0771 | 0.7143 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "roberta-base_bedtype", "results": []}]} | JBhug/roberta-base_bedtype | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:03:58+00:00 | [] | [] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
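The card leaves this blank; given the `llama` and `conversational` tags, a chat-template sketch like the following is the likely entry point (it assumes the tokenizer ships a chat template, and the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golf2248/qkuzzuz"  # id taken from this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format a single user turn with the tokenizer's chat template (assumed to exist).
messages = [{"role": "user", "content": "Explain what this model does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```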
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/qkuzzuz | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:04:11+00:00 | [
"1910.09700"
] | [] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
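No code is provided, but the `stablelm`/`text-generation` tags point to the standard causal-LM path. A sketch, assuming a recent `transformers` with native StableLM support; the fp16 and `device_map` choices are assumptions and require `accelerate`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model57"  # id taken from this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed precision; fp16 needs a GPU
    device_map="auto",          # requires `accelerate`
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```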
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model57 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:05:08+00:00 | [
"1910.09700"
] | [] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2686
- F1 Score: 0.8975
- Accuracy: 0.8973
## Model description
More information needed
## Intended uses & limitations
More information needed
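Since this repository holds a PEFT adapter rather than full model weights, inference presumably loads the base model first and then attaches the adapter. A sketch; the base-model class and the `trust_remote_code` flag are assumptions about `seqsight_4096_512_15M`:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_15M-L1_f"

# Load the frozen base model, then attach the fine-tuned adapter weights.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```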
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4376 | 2.17 | 200 | 0.3131 | 0.8777 | 0.8775 |
| 0.3136 | 4.35 | 400 | 0.3095 | 0.8718 | 0.8713 |
| 0.2965 | 6.52 | 600 | 0.2957 | 0.8853 | 0.8850 |
| 0.2931 | 8.7 | 800 | 0.2967 | 0.8853 | 0.8850 |
| 0.2855 | 10.87 | 1000 | 0.2915 | 0.8921 | 0.8919 |
| 0.279 | 13.04 | 1200 | 0.2921 | 0.8921 | 0.8919 |
| 0.2774 | 15.22 | 1400 | 0.2821 | 0.8941 | 0.8939 |
| 0.2731 | 17.39 | 1600 | 0.2847 | 0.8935 | 0.8932 |
| 0.2726 | 19.57 | 1800 | 0.2780 | 0.8891 | 0.8891 |
| 0.2711 | 21.74 | 2000 | 0.2868 | 0.8908 | 0.8905 |
| 0.2622 | 23.91 | 2200 | 0.2883 | 0.8949 | 0.8946 |
| 0.2659 | 26.09 | 2400 | 0.2896 | 0.8908 | 0.8905 |
| 0.2646 | 28.26 | 2600 | 0.2777 | 0.8928 | 0.8925 |
| 0.2609 | 30.43 | 2800 | 0.2813 | 0.8942 | 0.8939 |
| 0.2605 | 32.61 | 3000 | 0.2823 | 0.8963 | 0.8960 |
| 0.2589 | 34.78 | 3200 | 0.2771 | 0.8990 | 0.8987 |
| 0.2556 | 36.96 | 3400 | 0.2850 | 0.8942 | 0.8939 |
| 0.2551 | 39.13 | 3600 | 0.2833 | 0.8970 | 0.8966 |
| 0.2518 | 41.3 | 3800 | 0.2831 | 0.8963 | 0.8960 |
| 0.2515 | 43.48 | 4000 | 0.2820 | 0.8963 | 0.8960 |
| 0.255 | 45.65 | 4200 | 0.2756 | 0.8969 | 0.8966 |
| 0.248 | 47.83 | 4400 | 0.2804 | 0.8956 | 0.8953 |
| 0.2508 | 50.0 | 4600 | 0.2736 | 0.8969 | 0.8966 |
| 0.2465 | 52.17 | 4800 | 0.2755 | 0.8942 | 0.8939 |
| 0.2485 | 54.35 | 5000 | 0.2929 | 0.8855 | 0.8850 |
| 0.2436 | 56.52 | 5200 | 0.2922 | 0.8855 | 0.8850 |
| 0.2435 | 58.7 | 5400 | 0.2806 | 0.8881 | 0.8877 |
| 0.2428 | 60.87 | 5600 | 0.2852 | 0.8888 | 0.8884 |
| 0.2452 | 63.04 | 5800 | 0.2816 | 0.8881 | 0.8877 |
| 0.2428 | 65.22 | 6000 | 0.2777 | 0.8915 | 0.8912 |
| 0.242 | 67.39 | 6200 | 0.2873 | 0.8841 | 0.8836 |
| 0.2387 | 69.57 | 6400 | 0.2821 | 0.8861 | 0.8857 |
| 0.2393 | 71.74 | 6600 | 0.2894 | 0.8854 | 0.8850 |
| 0.2409 | 73.91 | 6800 | 0.2829 | 0.8888 | 0.8884 |
| 0.236 | 76.09 | 7000 | 0.2819 | 0.8881 | 0.8877 |
| 0.2364 | 78.26 | 7200 | 0.2800 | 0.8908 | 0.8905 |
| 0.2363 | 80.43 | 7400 | 0.2764 | 0.8915 | 0.8912 |
| 0.2361 | 82.61 | 7600 | 0.2743 | 0.8935 | 0.8932 |
| 0.2391 | 84.78 | 7800 | 0.2879 | 0.8875 | 0.8871 |
| 0.2357 | 86.96 | 8000 | 0.2767 | 0.8922 | 0.8919 |
| 0.237 | 89.13 | 8200 | 0.2769 | 0.8915 | 0.8912 |
| 0.2348 | 91.3 | 8400 | 0.2780 | 0.8915 | 0.8912 |
| 0.2338 | 93.48 | 8600 | 0.2758 | 0.8935 | 0.8932 |
| 0.2339 | 95.65 | 8800 | 0.2753 | 0.8935 | 0.8932 |
| 0.2358 | 97.83 | 9000 | 0.2763 | 0.8929 | 0.8925 |
| 0.2353 | 100.0 | 9200 | 0.2829 | 0.8888 | 0.8884 |
| 0.2349 | 102.17 | 9400 | 0.2784 | 0.8915 | 0.8912 |
| 0.2346 | 104.35 | 9600 | 0.2794 | 0.8908 | 0.8905 |
| 0.2327 | 106.52 | 9800 | 0.2779 | 0.8929 | 0.8925 |
| 0.2332 | 108.7 | 10000 | 0.2783 | 0.8929 | 0.8925 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:05:36+00:00 | [] | [] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- F1 Score: 0.9048
- Accuracy: 0.9049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
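The `linear` schedule over 10,000 steps maps onto `get_linear_schedule_with_warmup`. A sketch of the optimizer/scheduler pair implied by the list above; `model` is a placeholder, and zero warmup steps is an assumption matching the Trainer default:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# `model` is a placeholder for the PEFT-wrapped model being trained.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,         # assumed: Trainer default
    num_training_steps=10_000,  # from `training_steps` above
)
```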
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3874 | 2.17 | 200 | 0.2947 | 0.8811 | 0.8809 |
| 0.2921 | 4.35 | 400 | 0.2955 | 0.8888 | 0.8884 |
| 0.2805 | 6.52 | 600 | 0.2811 | 0.8927 | 0.8925 |
| 0.2752 | 8.7 | 800 | 0.2907 | 0.8888 | 0.8884 |
| 0.2682 | 10.87 | 1000 | 0.2885 | 0.8874 | 0.8871 |
| 0.2612 | 13.04 | 1200 | 0.2793 | 0.8962 | 0.8960 |
| 0.2565 | 15.22 | 1400 | 0.2839 | 0.8902 | 0.8898 |
| 0.2495 | 17.39 | 1600 | 0.2838 | 0.8949 | 0.8946 |
| 0.2489 | 19.57 | 1800 | 0.2698 | 0.9030 | 0.9028 |
| 0.2432 | 21.74 | 2000 | 0.2757 | 0.8942 | 0.8939 |
| 0.2346 | 23.91 | 2200 | 0.2817 | 0.8929 | 0.8925 |
| 0.234 | 26.09 | 2400 | 0.2747 | 0.8935 | 0.8932 |
| 0.2307 | 28.26 | 2600 | 0.2722 | 0.9043 | 0.9042 |
| 0.2248 | 30.43 | 2800 | 0.2782 | 0.8874 | 0.8871 |
| 0.224 | 32.61 | 3000 | 0.2796 | 0.8989 | 0.8987 |
| 0.2198 | 34.78 | 3200 | 0.2762 | 0.9017 | 0.9014 |
| 0.2167 | 36.96 | 3400 | 0.2808 | 0.8983 | 0.8980 |
| 0.2139 | 39.13 | 3600 | 0.2760 | 0.8982 | 0.8980 |
| 0.2062 | 41.3 | 3800 | 0.2803 | 0.8983 | 0.8980 |
| 0.2082 | 43.48 | 4000 | 0.2833 | 0.8976 | 0.8973 |
| 0.2089 | 45.65 | 4200 | 0.2781 | 0.9030 | 0.9028 |
| 0.2016 | 47.83 | 4400 | 0.2828 | 0.9056 | 0.9055 |
| 0.2043 | 50.0 | 4600 | 0.2781 | 0.8962 | 0.8960 |
| 0.1996 | 52.17 | 4800 | 0.2825 | 0.8969 | 0.8966 |
| 0.1967 | 54.35 | 5000 | 0.2907 | 0.8977 | 0.8973 |
| 0.1945 | 56.52 | 5200 | 0.3027 | 0.8841 | 0.8836 |
| 0.1929 | 58.7 | 5400 | 0.2808 | 0.8990 | 0.8987 |
| 0.1909 | 60.87 | 5600 | 0.2814 | 0.8990 | 0.8987 |
| 0.1904 | 63.04 | 5800 | 0.2880 | 0.8915 | 0.8912 |
| 0.19 | 65.22 | 6000 | 0.2885 | 0.8970 | 0.8966 |
| 0.187 | 67.39 | 6200 | 0.2886 | 0.8963 | 0.8960 |
| 0.1834 | 69.57 | 6400 | 0.2960 | 0.8922 | 0.8919 |
| 0.1855 | 71.74 | 6600 | 0.2953 | 0.8983 | 0.8980 |
| 0.1826 | 73.91 | 6800 | 0.2914 | 0.8956 | 0.8953 |
| 0.1796 | 76.09 | 7000 | 0.2983 | 0.8929 | 0.8925 |
| 0.1791 | 78.26 | 7200 | 0.2885 | 0.9010 | 0.9008 |
| 0.1798 | 80.43 | 7400 | 0.2929 | 0.8963 | 0.8960 |
| 0.178 | 82.61 | 7600 | 0.2925 | 0.9044 | 0.9042 |
| 0.1767 | 84.78 | 7800 | 0.3021 | 0.8909 | 0.8905 |
| 0.1783 | 86.96 | 8000 | 0.2919 | 0.8989 | 0.8987 |
| 0.1754 | 89.13 | 8200 | 0.2978 | 0.8949 | 0.8946 |
| 0.1753 | 91.3 | 8400 | 0.2952 | 0.8942 | 0.8939 |
| 0.1735 | 93.48 | 8600 | 0.2969 | 0.8996 | 0.8994 |
| 0.1745 | 95.65 | 8800 | 0.2928 | 0.9037 | 0.9035 |
| 0.1725 | 97.83 | 9000 | 0.2929 | 0.9030 | 0.9028 |
| 0.1732 | 100.0 | 9200 | 0.3001 | 0.8915 | 0.8912 |
| 0.1718 | 102.17 | 9400 | 0.2951 | 0.9003 | 0.9001 |
| 0.1729 | 104.35 | 9600 | 0.2937 | 0.8996 | 0.8994 |
| 0.1718 | 106.52 | 9800 | 0.2948 | 0.8996 | 0.8994 |
| 0.1713 | 108.7 | 10000 | 0.2957 | 0.8996 | 0.8994 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:05:39+00:00 | [] | [] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_4_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:06:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3157
- F1 Score: 0.8771
- Accuracy: 0.8771
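
For reference, a minimal loading sketch (none ships with this card): the adapter is attached to the base checkpoint with PEFT. The sequence-classification head, `num_labels=2`, and the `trust_remote_code` flag are assumptions, since the base architecture is not documented here.

```python
# Assumption-laden usage sketch: GUE_EMP_H3 is a binary task, so a
# 2-label classification head is guessed; seqsight may be a custom
# architecture, hence trust_remote_code.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_15M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights
model.eval()
```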
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
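
Expressed as 🤗 `TrainingArguments`, these settings would look roughly as follows — a reconstruction sketch, not the repo's actual training script; `output_dir` and the 200-step eval cadence (inferred from the results table below) are assumptions.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3-seqsight_4096_512_15M-L1_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,             # "training_steps: 10000"
    evaluation_strategy="steps",  # argument name used in Transformers 4.38.x
    eval_steps=200,               # assumed from the 200-step cadence in the table below
)
```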
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5141 | 2.13 | 200 | 0.4359 | 0.7886 | 0.7902 |
| 0.3998 | 4.26 | 400 | 0.3945 | 0.8303 | 0.8303 |
| 0.3759 | 6.38 | 600 | 0.3907 | 0.8321 | 0.8323 |
| 0.3625 | 8.51 | 800 | 0.3761 | 0.8377 | 0.8377 |
| 0.3542 | 10.64 | 1000 | 0.3715 | 0.8382 | 0.8383 |
| 0.3365 | 12.77 | 1200 | 0.3685 | 0.8462 | 0.8464 |
| 0.3269 | 14.89 | 1400 | 0.3503 | 0.8470 | 0.8470 |
| 0.3101 | 17.02 | 1600 | 0.3486 | 0.8503 | 0.8504 |
| 0.3011 | 19.15 | 1800 | 0.3602 | 0.8527 | 0.8530 |
| 0.2962 | 21.28 | 2000 | 0.3519 | 0.8593 | 0.8597 |
| 0.2926 | 23.4 | 2200 | 0.3292 | 0.8570 | 0.8570 |
| 0.284 | 25.53 | 2400 | 0.3457 | 0.8575 | 0.8577 |
| 0.2853 | 27.66 | 2600 | 0.3263 | 0.8590 | 0.8591 |
| 0.284 | 29.79 | 2800 | 0.3354 | 0.8582 | 0.8584 |
| 0.2801 | 31.91 | 3000 | 0.3358 | 0.8622 | 0.8624 |
| 0.2764 | 34.04 | 3200 | 0.3264 | 0.8656 | 0.8657 |
| 0.2755 | 36.17 | 3400 | 0.3383 | 0.8622 | 0.8624 |
| 0.2724 | 38.3 | 3600 | 0.3252 | 0.8663 | 0.8664 |
| 0.2728 | 40.43 | 3800 | 0.3269 | 0.8676 | 0.8677 |
| 0.2718 | 42.55 | 4000 | 0.3340 | 0.8676 | 0.8677 |
| 0.2695 | 44.68 | 4200 | 0.3235 | 0.8657 | 0.8657 |
| 0.2667 | 46.81 | 4400 | 0.3381 | 0.8655 | 0.8657 |
| 0.2664 | 48.94 | 4600 | 0.3301 | 0.8689 | 0.8691 |
| 0.2654 | 51.06 | 4800 | 0.3266 | 0.8697 | 0.8697 |
| 0.2641 | 53.19 | 5000 | 0.3489 | 0.8607 | 0.8611 |
| 0.2677 | 55.32 | 5200 | 0.3200 | 0.8677 | 0.8677 |
| 0.2607 | 57.45 | 5400 | 0.3333 | 0.8683 | 0.8684 |
| 0.2581 | 59.57 | 5600 | 0.3247 | 0.8690 | 0.8691 |
| 0.2612 | 61.7 | 5800 | 0.3181 | 0.8677 | 0.8677 |
| 0.2565 | 63.83 | 6000 | 0.3443 | 0.8682 | 0.8684 |
| 0.2626 | 65.96 | 6200 | 0.3354 | 0.8676 | 0.8677 |
| 0.2579 | 68.09 | 6400 | 0.3338 | 0.8663 | 0.8664 |
| 0.2546 | 70.21 | 6600 | 0.3380 | 0.8683 | 0.8684 |
| 0.2562 | 72.34 | 6800 | 0.3305 | 0.8683 | 0.8684 |
| 0.2545 | 74.47 | 7000 | 0.3513 | 0.8600 | 0.8604 |
| 0.2507 | 76.6 | 7200 | 0.3392 | 0.8670 | 0.8671 |
| 0.2568 | 78.72 | 7400 | 0.3329 | 0.8676 | 0.8677 |
| 0.2515 | 80.85 | 7600 | 0.3307 | 0.8670 | 0.8671 |
| 0.2494 | 82.98 | 7800 | 0.3388 | 0.8649 | 0.8651 |
| 0.2537 | 85.11 | 8000 | 0.3287 | 0.8663 | 0.8664 |
| 0.2535 | 87.23 | 8200 | 0.3317 | 0.8663 | 0.8664 |
| 0.25 | 89.36 | 8400 | 0.3356 | 0.8663 | 0.8664 |
| 0.2493 | 91.49 | 8600 | 0.3397 | 0.8649 | 0.8651 |
| 0.2504 | 93.62 | 8800 | 0.3360 | 0.8656 | 0.8657 |
| 0.2532 | 95.74 | 9000 | 0.3389 | 0.8649 | 0.8651 |
| 0.2502 | 97.87 | 9200 | 0.3313 | 0.8676 | 0.8677 |
| 0.2497 | 100.0 | 9400 | 0.3302 | 0.8683 | 0.8684 |
| 0.2516 | 102.13 | 9600 | 0.3324 | 0.8676 | 0.8677 |
| 0.2499 | 104.26 | 9800 | 0.3338 | 0.8663 | 0.8664 |
| 0.2503 | 106.38 | 10000 | 0.3336 | 0.8663 | 0.8664 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:06:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3-seqsight\_4096\_512\_15M-L1\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3157
* F1 Score: 0.8771
* Accuracy: 0.8771
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3240
- F1 Score: 0.8737
- Accuracy: 0.8737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4577 | 2.13 | 200 | 0.3877 | 0.8286 | 0.8290 |
| 0.3469 | 4.26 | 400 | 0.3609 | 0.8490 | 0.8490 |
| 0.3103 | 6.38 | 600 | 0.3633 | 0.8493 | 0.8497 |
| 0.2955 | 8.51 | 800 | 0.3413 | 0.8589 | 0.8591 |
| 0.2851 | 10.64 | 1000 | 0.3289 | 0.8663 | 0.8664 |
| 0.2766 | 12.77 | 1200 | 0.3486 | 0.8540 | 0.8544 |
| 0.2727 | 14.89 | 1400 | 0.3322 | 0.8649 | 0.8651 |
| 0.2675 | 17.02 | 1600 | 0.3172 | 0.8731 | 0.8731 |
| 0.2608 | 19.15 | 1800 | 0.3377 | 0.8601 | 0.8604 |
| 0.2563 | 21.28 | 2000 | 0.3345 | 0.8643 | 0.8644 |
| 0.2561 | 23.4 | 2200 | 0.3228 | 0.8751 | 0.8751 |
| 0.2447 | 25.53 | 2400 | 0.3772 | 0.8566 | 0.8570 |
| 0.2497 | 27.66 | 2600 | 0.3324 | 0.8663 | 0.8664 |
| 0.2466 | 29.79 | 2800 | 0.3237 | 0.8744 | 0.8744 |
| 0.2406 | 31.91 | 3000 | 0.3300 | 0.8730 | 0.8731 |
| 0.2368 | 34.04 | 3200 | 0.3247 | 0.8737 | 0.8737 |
| 0.2362 | 36.17 | 3400 | 0.3424 | 0.8696 | 0.8697 |
| 0.2321 | 38.3 | 3600 | 0.3380 | 0.8730 | 0.8731 |
| 0.2306 | 40.43 | 3800 | 0.3357 | 0.8750 | 0.8751 |
| 0.2297 | 42.55 | 4000 | 0.3661 | 0.8669 | 0.8671 |
| 0.2267 | 44.68 | 4200 | 0.3540 | 0.8730 | 0.8731 |
| 0.2239 | 46.81 | 4400 | 0.3621 | 0.8722 | 0.8724 |
| 0.2206 | 48.94 | 4600 | 0.3667 | 0.8668 | 0.8671 |
| 0.2184 | 51.06 | 4800 | 0.3429 | 0.8751 | 0.8751 |
| 0.2186 | 53.19 | 5000 | 0.3682 | 0.8695 | 0.8697 |
| 0.2168 | 55.32 | 5200 | 0.3536 | 0.8717 | 0.8717 |
| 0.2109 | 57.45 | 5400 | 0.3656 | 0.8696 | 0.8697 |
| 0.2103 | 59.57 | 5600 | 0.3503 | 0.8764 | 0.8764 |
| 0.2118 | 61.7 | 5800 | 0.3421 | 0.8784 | 0.8784 |
| 0.2059 | 63.83 | 6000 | 0.3849 | 0.8716 | 0.8717 |
| 0.2099 | 65.96 | 6200 | 0.3856 | 0.8654 | 0.8657 |
| 0.2062 | 68.09 | 6400 | 0.3654 | 0.8737 | 0.8737 |
| 0.2016 | 70.21 | 6600 | 0.3694 | 0.8730 | 0.8731 |
| 0.2023 | 72.34 | 6800 | 0.3634 | 0.8770 | 0.8771 |
| 0.2009 | 74.47 | 7000 | 0.4164 | 0.8619 | 0.8624 |
| 0.1965 | 76.6 | 7200 | 0.3775 | 0.8696 | 0.8697 |
| 0.2008 | 78.72 | 7400 | 0.3728 | 0.8730 | 0.8731 |
| 0.1949 | 80.85 | 7600 | 0.3727 | 0.8723 | 0.8724 |
| 0.1923 | 82.98 | 7800 | 0.3806 | 0.8722 | 0.8724 |
| 0.195 | 85.11 | 8000 | 0.3777 | 0.8702 | 0.8704 |
| 0.1949 | 87.23 | 8200 | 0.3796 | 0.8736 | 0.8737 |
| 0.1924 | 89.36 | 8400 | 0.3854 | 0.8709 | 0.8711 |
| 0.1899 | 91.49 | 8600 | 0.4060 | 0.8688 | 0.8691 |
| 0.1873 | 93.62 | 8800 | 0.3931 | 0.8722 | 0.8724 |
| 0.1945 | 95.74 | 9000 | 0.3907 | 0.8716 | 0.8717 |
| 0.1907 | 97.87 | 9200 | 0.3828 | 0.8716 | 0.8717 |
| 0.1898 | 100.0 | 9400 | 0.3809 | 0.8737 | 0.8737 |
| 0.1906 | 102.13 | 9600 | 0.3861 | 0.8729 | 0.8731 |
| 0.1884 | 104.26 | 9800 | 0.3847 | 0.8716 | 0.8717 |
| 0.1891 | 106.38 | 10000 | 0.3846 | 0.8710 | 0.8711 |
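
A `compute_metrics` hook of the following shape would produce the F1 Score and Accuracy columns above — a sketch only, since the card does not include the real evaluation code, and the F1 averaging mode (`weighted` vs. `binary`) is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the 🤗 Trainer passes in
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="weighted"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }
```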
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:06:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3-seqsight\_4096\_512\_15M-L8\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3240
* F1 Score: 0.8737
* Accuracy: 0.8737
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2642
- F1 Score: 0.9023
- Accuracy: 0.9021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3648 | 2.17 | 200 | 0.2892 | 0.8852 | 0.8850 |
| 0.2836 | 4.35 | 400 | 0.2803 | 0.8954 | 0.8953 |
| 0.2716 | 6.52 | 600 | 0.2752 | 0.9001 | 0.9001 |
| 0.2644 | 8.7 | 800 | 0.2867 | 0.8874 | 0.8871 |
| 0.2542 | 10.87 | 1000 | 0.2844 | 0.8936 | 0.8932 |
| 0.2455 | 13.04 | 1200 | 0.2891 | 0.8875 | 0.8871 |
| 0.2348 | 15.22 | 1400 | 0.2901 | 0.8889 | 0.8884 |
| 0.226 | 17.39 | 1600 | 0.2846 | 0.8949 | 0.8946 |
| 0.2204 | 19.57 | 1800 | 0.2804 | 0.9037 | 0.9035 |
| 0.2127 | 21.74 | 2000 | 0.2843 | 0.8963 | 0.8960 |
| 0.1998 | 23.91 | 2200 | 0.2911 | 0.8935 | 0.8932 |
| 0.1956 | 26.09 | 2400 | 0.2842 | 0.8982 | 0.8980 |
| 0.1909 | 28.26 | 2600 | 0.2996 | 0.9016 | 0.9014 |
| 0.1785 | 30.43 | 2800 | 0.3167 | 0.8847 | 0.8843 |
| 0.1761 | 32.61 | 3000 | 0.3012 | 0.8974 | 0.8973 |
| 0.1675 | 34.78 | 3200 | 0.3298 | 0.8820 | 0.8816 |
| 0.158 | 36.96 | 3400 | 0.3107 | 0.8934 | 0.8932 |
| 0.1546 | 39.13 | 3600 | 0.3235 | 0.8893 | 0.8891 |
| 0.1427 | 41.3 | 3800 | 0.3383 | 0.8827 | 0.8823 |
| 0.1421 | 43.48 | 4000 | 0.3797 | 0.8739 | 0.8734 |
| 0.1371 | 45.65 | 4200 | 0.3425 | 0.8907 | 0.8905 |
| 0.1291 | 47.83 | 4400 | 0.3497 | 0.8819 | 0.8816 |
| 0.1265 | 50.0 | 4600 | 0.3615 | 0.8866 | 0.8864 |
| 0.1208 | 52.17 | 4800 | 0.3905 | 0.8752 | 0.8747 |
| 0.1148 | 54.35 | 5000 | 0.3784 | 0.8839 | 0.8836 |
| 0.115 | 56.52 | 5200 | 0.3857 | 0.8732 | 0.8727 |
| 0.1046 | 58.7 | 5400 | 0.3877 | 0.8786 | 0.8782 |
| 0.1019 | 60.87 | 5600 | 0.3844 | 0.8772 | 0.8768 |
| 0.1023 | 63.04 | 5800 | 0.3925 | 0.8790 | 0.8789 |
| 0.0957 | 65.22 | 6000 | 0.4220 | 0.8797 | 0.8795 |
| 0.095 | 67.39 | 6200 | 0.4142 | 0.8765 | 0.8761 |
| 0.0902 | 69.57 | 6400 | 0.4613 | 0.8739 | 0.8734 |
| 0.0899 | 71.74 | 6600 | 0.4272 | 0.8812 | 0.8809 |
| 0.0829 | 73.91 | 6800 | 0.4464 | 0.8690 | 0.8686 |
| 0.0841 | 76.09 | 7000 | 0.4438 | 0.8764 | 0.8761 |
| 0.0817 | 78.26 | 7200 | 0.4425 | 0.8751 | 0.8747 |
| 0.0782 | 80.43 | 7400 | 0.4576 | 0.8785 | 0.8782 |
| 0.0765 | 82.61 | 7600 | 0.4707 | 0.8744 | 0.8741 |
| 0.0689 | 84.78 | 7800 | 0.5113 | 0.8704 | 0.8700 |
| 0.0742 | 86.96 | 8000 | 0.4925 | 0.8738 | 0.8734 |
| 0.0721 | 89.13 | 8200 | 0.4897 | 0.8758 | 0.8754 |
| 0.0746 | 91.3 | 8400 | 0.4792 | 0.8717 | 0.8713 |
| 0.0713 | 93.48 | 8600 | 0.4866 | 0.8738 | 0.8734 |
| 0.0695 | 95.65 | 8800 | 0.4749 | 0.8791 | 0.8789 |
| 0.0676 | 97.83 | 9000 | 0.4816 | 0.8757 | 0.8754 |
| 0.0684 | 100.0 | 9200 | 0.5062 | 0.8725 | 0.8720 |
| 0.0661 | 102.17 | 9400 | 0.4917 | 0.8730 | 0.8727 |
| 0.0642 | 104.35 | 9600 | 0.4921 | 0.8751 | 0.8747 |
| 0.0628 | 106.52 | 9800 | 0.4966 | 0.8751 | 0.8747 |
| 0.062 | 108.7 | 10000 | 0.5035 | 0.8744 | 0.8741 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:06:50+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H4-seqsight\_4096\_512\_15M-L32\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2642
* F1 Score: 0.9023
* Accuracy: 0.9021
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3044
- F1 Score: 0.8784
- Accuracy: 0.8784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
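
For readers who want the `linear` schedule outside the Trainer, below is a sketch of the equivalent optimizer/scheduler pair; zero warmup steps is an assumption (none are listed), and the placeholder module stands in for the real PEFT-wrapped model.

```python
import torch
import torch.nn as nn
from transformers import get_linear_schedule_with_warmup

model = nn.Linear(8, 2)  # placeholder for the LoRA-wrapped classifier
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)
for _ in range(200):        # one optimizer/scheduler step per training step
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # lr has decayed linearly toward 0
```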
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4336 | 2.13 | 200 | 0.3696 | 0.8395 | 0.8397 |
| 0.3178 | 4.26 | 400 | 0.3452 | 0.8523 | 0.8524 |
| 0.29 | 6.38 | 600 | 0.3346 | 0.8630 | 0.8631 |
| 0.2779 | 8.51 | 800 | 0.3198 | 0.8663 | 0.8664 |
| 0.2676 | 10.64 | 1000 | 0.3190 | 0.8677 | 0.8677 |
| 0.2577 | 12.77 | 1200 | 0.3414 | 0.8648 | 0.8651 |
| 0.2524 | 14.89 | 1400 | 0.3582 | 0.8579 | 0.8584 |
| 0.243 | 17.02 | 1600 | 0.3247 | 0.8818 | 0.8818 |
| 0.2359 | 19.15 | 1800 | 0.3461 | 0.8690 | 0.8691 |
| 0.2289 | 21.28 | 2000 | 0.3568 | 0.8716 | 0.8717 |
| 0.2244 | 23.4 | 2200 | 0.3518 | 0.8750 | 0.8751 |
| 0.2135 | 25.53 | 2400 | 0.3840 | 0.8695 | 0.8697 |
| 0.2123 | 27.66 | 2600 | 0.3592 | 0.8615 | 0.8617 |
| 0.2053 | 29.79 | 2800 | 0.3740 | 0.8620 | 0.8624 |
| 0.1971 | 31.91 | 3000 | 0.3504 | 0.8750 | 0.8751 |
| 0.1886 | 34.04 | 3200 | 0.3509 | 0.8777 | 0.8778 |
| 0.1823 | 36.17 | 3400 | 0.3787 | 0.8742 | 0.8744 |
| 0.1778 | 38.3 | 3600 | 0.3780 | 0.8764 | 0.8764 |
| 0.1707 | 40.43 | 3800 | 0.4036 | 0.8756 | 0.8758 |
| 0.1662 | 42.55 | 4000 | 0.4630 | 0.8688 | 0.8691 |
| 0.1604 | 44.68 | 4200 | 0.4225 | 0.8762 | 0.8764 |
| 0.1532 | 46.81 | 4400 | 0.4284 | 0.8689 | 0.8691 |
| 0.1461 | 48.94 | 4600 | 0.4591 | 0.8762 | 0.8764 |
| 0.1439 | 51.06 | 4800 | 0.4378 | 0.8703 | 0.8704 |
| 0.1377 | 53.19 | 5000 | 0.4688 | 0.8715 | 0.8717 |
| 0.1358 | 55.32 | 5200 | 0.4661 | 0.8689 | 0.8691 |
| 0.1266 | 57.45 | 5400 | 0.4683 | 0.8743 | 0.8744 |
| 0.1231 | 59.57 | 5600 | 0.4519 | 0.8717 | 0.8717 |
| 0.1247 | 61.7 | 5800 | 0.4724 | 0.8703 | 0.8704 |
| 0.1192 | 63.83 | 6000 | 0.4984 | 0.8662 | 0.8664 |
| 0.1186 | 65.96 | 6200 | 0.5069 | 0.8668 | 0.8671 |
| 0.1129 | 68.09 | 6400 | 0.4910 | 0.8723 | 0.8724 |
| 0.1093 | 70.21 | 6600 | 0.5178 | 0.8682 | 0.8684 |
| 0.1083 | 72.34 | 6800 | 0.5222 | 0.8696 | 0.8697 |
| 0.1071 | 74.47 | 7000 | 0.5877 | 0.8619 | 0.8624 |
| 0.101 | 76.6 | 7200 | 0.5514 | 0.8628 | 0.8631 |
| 0.1024 | 78.72 | 7400 | 0.5411 | 0.8696 | 0.8697 |
| 0.0981 | 80.85 | 7600 | 0.5547 | 0.8642 | 0.8644 |
| 0.0952 | 82.98 | 7800 | 0.5654 | 0.8675 | 0.8677 |
| 0.0928 | 85.11 | 8000 | 0.5639 | 0.8648 | 0.8651 |
| 0.0945 | 87.23 | 8200 | 0.5726 | 0.8682 | 0.8684 |
| 0.0919 | 89.36 | 8400 | 0.5745 | 0.8717 | 0.8717 |
| 0.092 | 91.49 | 8600 | 0.6028 | 0.8641 | 0.8644 |
| 0.0908 | 93.62 | 8800 | 0.5774 | 0.8661 | 0.8664 |
| 0.0898 | 95.74 | 9000 | 0.5796 | 0.8669 | 0.8671 |
| 0.0902 | 97.87 | 9200 | 0.5734 | 0.8682 | 0.8684 |
| 0.0879 | 100.0 | 9400 | 0.5769 | 0.8655 | 0.8657 |
| 0.087 | 102.13 | 9600 | 0.5881 | 0.8628 | 0.8631 |
| 0.0821 | 104.26 | 9800 | 0.5910 | 0.8669 | 0.8671 |
| 0.0826 | 106.38 | 10000 | 0.5928 | 0.8689 | 0.8691 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:07:01+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3-seqsight\_4096\_512\_15M-L32\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3044
* F1 Score: 0.8784
* Accuracy: 0.8784
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
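
The card leaves this section blank; the following is only a generic, unverified sketch for a llama-family text-generation checkpoint, using the repo id from this row's metadata (`golf2248/2glbiiy`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "golf2248/2glbiiy"  # taken from the row metadata, not from the card body
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```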
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/2glbiiy | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:09:00+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_boolq_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5923
- Accuracy: 0.7778
- F1: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
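The snippet below is a minimal sketch of how such a LoRA adapter could be attached to the base model with 🤗 PEFT before training; the rank, alpha, dropout, and target modules are illustrative assumptions, since the card does not record the adapter configuration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Load the base model for binary (BoolQ-style) sequence classification
base = AutoModelForSequenceClassification.from_pretrained(
    "EMBEDDIA/crosloengual-bert", num_labels=2
)

# Assumed LoRA hyperparameters -- not read from the released adapter
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```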
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.7055 | 4.1667 | 50 | 0.6182 | 0.7222 | 0.7325 |
| 0.6596 | 8.3333 | 100 | 0.5842 | 0.8333 | 0.8243 |
| 0.6565 | 12.5 | 150 | 0.5833 | 0.8333 | 0.8243 |
| 0.6642 | 16.6667 | 200 | 0.5852 | 0.7778 | 0.6806 |
| 0.6495 | 20.8333 | 250 | 0.5873 | 0.7778 | 0.6806 |
| 0.6477 | 25.0 | 300 | 0.5892 | 0.7778 | 0.6806 |
| 0.652 | 29.1667 | 350 | 0.5918 | 0.7778 | 0.6806 |
| 0.6362 | 33.3333 | 400 | 0.5923 | 0.7778 | 0.6806 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "lora_fine_tuned_boolq_croslo", "results": []}]} | lenatr99/lora_fine_tuned_boolq_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T18:09:42+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| lora\_fine\_tuned\_boolq\_croslo
================================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5923
* Accuracy: 0.7778
* F1: 0.6806
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
44,
99,
5,
51
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loha_fine_tuned_boolq_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5904
- Accuracy: 0.8333
- F1: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
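A minimal sketch of the corresponding LoHa (low-rank Hadamard product) setup with 🤗 PEFT is shown below; `r`, `alpha`, and `target_modules` are illustrative assumptions, since the card does not record the adapter configuration.

```python
from peft import LoHaConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "EMBEDDIA/crosloengual-bert", num_labels=2
)

# Assumed LoHa hyperparameters -- not read from the released adapter
loha_config = LoHaConfig(
    task_type="SEQ_CLS",
    r=8,
    alpha=16,
    target_modules=["query", "value"],  # BERT attention projections (assumed)
)
model = get_peft_model(base, loha_config)
```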
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.7131 | 4.1667 | 50 | 0.6457 | 0.7222 | 0.7325 |
| 0.676 | 8.3333 | 100 | 0.6157 | 0.7222 | 0.7325 |
| 0.6777 | 12.5 | 150 | 0.6047 | 0.7778 | 0.7778 |
| 0.687 | 16.6667 | 200 | 0.5982 | 0.7778 | 0.7778 |
| 0.669 | 20.8333 | 250 | 0.5940 | 0.8333 | 0.8243 |
| 0.6743 | 25.0 | 300 | 0.5911 | 0.8333 | 0.8243 |
| 0.6841 | 29.1667 | 350 | 0.5910 | 0.8333 | 0.8243 |
| 0.6639 | 33.3333 | 400 | 0.5904 | 0.8333 | 0.8243 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "loha_fine_tuned_boolq_croslo", "results": []}]} | lenatr99/loha_fine_tuned_boolq_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T18:10:14+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| loha\_fine\_tuned\_boolq\_croslo
================================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5904
* Accuracy: 0.8333
* F1: 0.8243
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
44,
99,
5,
51
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_boolq_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3270
- Accuracy: 0.8333
- F1: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
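These hyperparameters map directly onto a 🤗 Transformers `Trainer` setup; the sketch below shows the correspondence. `train_ds` and `eval_ds` are placeholders, since the card does not name the dataset.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "EMBEDDIA/crosloengual-bert", num_labels=2
)
args = TrainingArguments(
    output_dir="fine_tuned_boolq_croslo",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=400,                 # training_steps: 400
    evaluation_strategy="steps",
    eval_steps=50,                 # matches the 50-step cadence in the table below
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # placeholder: dataset not documented in the card
    eval_dataset=eval_ds,    # placeholder
)
trainer.train()
```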
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.5567 | 4.1667 | 50 | 0.5262 | 0.7222 | 0.6523 |
| 0.1098 | 8.3333 | 100 | 0.8949 | 0.8333 | 0.8243 |
| 0.0031 | 12.5 | 150 | 1.2236 | 0.7778 | 0.7778 |
| 0.0011 | 16.6667 | 200 | 1.2640 | 0.7778 | 0.7778 |
| 0.0008 | 20.8333 | 250 | 1.2343 | 0.8333 | 0.8243 |
| 0.0007 | 25.0 | 300 | 1.2852 | 0.8333 | 0.8243 |
| 0.0005 | 29.1667 | 350 | 1.3133 | 0.8333 | 0.8243 |
| 0.0005 | 33.3333 | 400 | 1.3270 | 0.8333 | 0.8243 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "fine_tuned_boolq_croslo", "results": []}]} | lenatr99/fine_tuned_boolq_croslo | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:11:08+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| fine\_tuned\_boolq\_croslo
==========================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3270
* Accuracy: 0.8333
* F1: 0.8243
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
59,
99,
5,
40
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | import gradio as gr
import requests
# Function to fetch the current price of a cryptocurrency from CoinGecko
def get_crypto_price(coin_id):
    # Define the endpoint and query parameters
    endpoint = "https://api.coingecko.com/api/v3/simple/price"
    params = {
        "ids": coin_id,  # ID of the cryptocurrency
        "vs_currencies": "usd",  # Currency to compare with (USD)
    }
    # Make the GET request to CoinGecko API (with a timeout so a stalled
    # request cannot hang the interface)
    response = requests.get(endpoint, params=params, timeout=10)
    if response.status_code == 200:
        # Get the JSON data from the response
        data = response.json()
        # Extract the price for the specified coin
        price = data.get(coin_id, {}).get("usd", "N/A")
        if price == "N/A":
            return f"Couldn't find data for '{coin_id}'. Please check the ID and try again."
        else:
            return f"The current price of {coin_id} is ${price:.2f} USD."
    else:
        return f"Error fetching data from CoinGecko. Status code: {response.status_code}"

# Gradio interface function
def crypto_price_gradio(coin_id):
    return get_crypto_price(coin_id)

# Create the Gradio interface
iface = gr.Interface(
    fn=crypto_price_gradio,
    inputs=gr.Textbox(label="Enter Cryptocurrency ID (e.g., bitcoin, ethereum)"),
    outputs=gr.Textbox(label="Price in USD"),
    title="Cryptocurrency Price Checker",
    description="Enter a cryptocurrency ID to get its current price in USD.",
)

# Launch the Gradio interface
iface.launch()
| {} | john3huggingface/com-trading-bot | null | [
"region:us"
] | null | 2024-05-03T18:11:33+00:00 | [] | [] | TAGS
#region-us
| import gradio as gr
import requests
# Function to fetch the current price of a cryptocurrency from CoinGecko
def get_crypto_price(coin_id):
# Define the endpoint and query parameters
endpoint = f"URL
params = {
"ids": coin_id, # ID of the cryptocurrency
"vs_currencies": "usd", # Currency to compare with (USD)
}
# Make the GET request to CoinGecko API
response = URL(endpoint, params=params)
if response.status_code == 200:
# Get the JSON data from the response
data = URL()
# Extract the price for the specified coin
price = URL(coin_id, {}).get("usd", "N/A")
if price == "N/A":
return f"Couldn't find data for '{coin_id}'. Please check the ID and try again."
else:
return f"The current price of {coin_id} is ${price:.2f} USD."
else:
return f"Error fetching data from CoinGecko. Status code: {response.status_code}"
# Gradio interface function
def crypto_price_gradio(coin_id):
return get_crypto_price(coin_id)
# Create the Gradio interface
iface = gr.Interface(
fn=crypto_price_gradio,
inputs=gr.Textbox(label="Enter Cryptocurrency ID (e.g., bitcoin, ethereum)"),
outputs=gr.Textbox(label="Price in USD"),
title="Cryptocurrency Price Checker",
description="Enter a cryptocurrency ID to get its current price in USD."
)
# Launch the Gradio interface
URL()
| [
"# Function to fetch the current price of a cryptocurrency from CoinGecko\ndef get_crypto_price(coin_id):\n # Define the endpoint and query parameters\n endpoint = f\"URL\n params = {\n \"ids\": coin_id, # ID of the cryptocurrency\n \"vs_currencies\": \"usd\", # Currency to compare with (USD)\n }\n # Make the GET request to CoinGecko API\n response = URL(endpoint, params=params)\n \n if response.status_code == 200:\n # Get the JSON data from the response\n data = URL()\n # Extract the price for the specified coin\n price = URL(coin_id, {}).get(\"usd\", \"N/A\")\n if price == \"N/A\":\n return f\"Couldn't find data for '{coin_id}'. Please check the ID and try again.\"\n else:\n return f\"The current price of {coin_id} is ${price:.2f} USD.\"\n else:\n return f\"Error fetching data from CoinGecko. Status code: {response.status_code}\"",
"# Gradio interface function\ndef crypto_price_gradio(coin_id):\n return get_crypto_price(coin_id)",
"# Create the Gradio interface\niface = gr.Interface(\n fn=crypto_price_gradio,\n inputs=gr.Textbox(label=\"Enter Cryptocurrency ID (e.g., bitcoin, ethereum)\"),\n outputs=gr.Textbox(label=\"Price in USD\"),\n title=\"Cryptocurrency Price Checker\",\n description=\"Enter a cryptocurrency ID to get its current price in USD.\"\n)",
"# Launch the Gradio interface\nURL()"
] | [
"TAGS\n#region-us \n",
"# Function to fetch the current price of a cryptocurrency from CoinGecko\ndef get_crypto_price(coin_id):\n # Define the endpoint and query parameters\n endpoint = f\"URL\n params = {\n \"ids\": coin_id, # ID of the cryptocurrency\n \"vs_currencies\": \"usd\", # Currency to compare with (USD)\n }\n # Make the GET request to CoinGecko API\n response = URL(endpoint, params=params)\n \n if response.status_code == 200:\n # Get the JSON data from the response\n data = URL()\n # Extract the price for the specified coin\n price = URL(coin_id, {}).get(\"usd\", \"N/A\")\n if price == \"N/A\":\n return f\"Couldn't find data for '{coin_id}'. Please check the ID and try again.\"\n else:\n return f\"The current price of {coin_id} is ${price:.2f} USD.\"\n else:\n return f\"Error fetching data from CoinGecko. Status code: {response.status_code}\"",
"# Gradio interface function\ndef crypto_price_gradio(coin_id):\n return get_crypto_price(coin_id)",
"# Create the Gradio interface\niface = gr.Interface(\n fn=crypto_price_gradio,\n inputs=gr.Textbox(label=\"Enter Cryptocurrency ID (e.g., bitcoin, ethereum)\"),\n outputs=gr.Textbox(label=\"Price in USD\"),\n title=\"Cryptocurrency Price Checker\",\n description=\"Enter a cryptocurrency ID to get its current price in USD.\"\n)",
"# Launch the Gradio interface\nURL()"
] | [
5,
258,
33,
109,
11
] | [
"TAGS\n#region-us \n# Function to fetch the current price of a cryptocurrency from CoinGecko\ndef get_crypto_price(coin_id):\n # Define the endpoint and query parameters\n endpoint = f\"URL\n params = {\n \"ids\": coin_id, # ID of the cryptocurrency\n \"vs_currencies\": \"usd\", # Currency to compare with (USD)\n }\n # Make the GET request to CoinGecko API\n response = URL(endpoint, params=params)\n \n if response.status_code == 200:\n # Get the JSON data from the response\n data = URL()\n # Extract the price for the specified coin\n price = URL(coin_id, {}).get(\"usd\", \"N/A\")\n if price == \"N/A\":\n return f\"Couldn't find data for '{coin_id}'. Please check the ID and try again.\"\n else:\n return f\"The current price of {coin_id} is ${price:.2f} USD.\"\n else:\n return f\"Error fetching data from CoinGecko. Status code: {response.status_code}\"# Gradio interface function\ndef crypto_price_gradio(coin_id):\n return get_crypto_price(coin_id)# Create the Gradio interface\niface = gr.Interface(\n fn=crypto_price_gradio,\n inputs=gr.Textbox(label=\"Enter Cryptocurrency ID (e.g., bitcoin, ethereum)\"),\n outputs=gr.Textbox(label=\"Price in USD\"),\n title=\"Cryptocurrency Price Checker\",\n description=\"Enter a cryptocurrency ID to get its current price in USD.\"\n)# Launch the Gradio interface\nURL()"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5714
- F1 Score: 0.7100
- Accuracy: 0.7097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
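The optimizer/scheduler pair listed above corresponds to plain PyTorch Adam with a linear decay schedule; a sketch of that loop follows. `model` and `train_dataloader` are placeholders, and `num_warmup_steps=0` is an assumption, since the card does not mention warmup.

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # warmup assumed 0
)

step = 0
while step < 10_000:
    for batch in train_dataloader:  # placeholder dataloader
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
        step += 1
        if step == 10_000:
            break
```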
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6481 | 0.93 | 200 | 0.6032 | 0.6807 | 0.6806 |
| 0.6086 | 1.87 | 400 | 0.6010 | 0.6746 | 0.6765 |
| 0.5924 | 2.8 | 600 | 0.5803 | 0.6995 | 0.6991 |
| 0.585 | 3.74 | 800 | 0.5731 | 0.7076 | 0.7073 |
| 0.5775 | 4.67 | 1000 | 0.5705 | 0.7076 | 0.7073 |
| 0.5714 | 5.61 | 1200 | 0.5738 | 0.7068 | 0.7073 |
| 0.5705 | 6.54 | 1400 | 0.5651 | 0.7123 | 0.7120 |
| 0.5596 | 7.48 | 1600 | 0.5609 | 0.7138 | 0.7135 |
| 0.5668 | 8.41 | 1800 | 0.5605 | 0.7165 | 0.7161 |
| 0.5637 | 9.35 | 2000 | 0.5686 | 0.7122 | 0.7129 |
| 0.5557 | 10.28 | 2200 | 0.5603 | 0.7205 | 0.7202 |
| 0.5592 | 11.21 | 2400 | 0.5611 | 0.7178 | 0.7176 |
| 0.5556 | 12.15 | 2600 | 0.5620 | 0.7132 | 0.7132 |
| 0.5532 | 13.08 | 2800 | 0.5687 | 0.7144 | 0.7150 |
| 0.5512 | 14.02 | 3000 | 0.5651 | 0.7165 | 0.7167 |
| 0.5511 | 14.95 | 3200 | 0.5617 | 0.7211 | 0.7211 |
| 0.5489 | 15.89 | 3400 | 0.5745 | 0.7072 | 0.7091 |
| 0.5461 | 16.82 | 3600 | 0.5578 | 0.7225 | 0.7223 |
| 0.552 | 17.76 | 3800 | 0.5574 | 0.7210 | 0.7208 |
| 0.5484 | 18.69 | 4000 | 0.5621 | 0.7203 | 0.7205 |
| 0.5423 | 19.63 | 4200 | 0.5727 | 0.7072 | 0.7091 |
| 0.5467 | 20.56 | 4400 | 0.5544 | 0.7226 | 0.7223 |
| 0.5422 | 21.5 | 4600 | 0.5815 | 0.7032 | 0.7062 |
| 0.5429 | 22.43 | 4800 | 0.5593 | 0.7255 | 0.7255 |
| 0.5434 | 23.36 | 5000 | 0.5774 | 0.7102 | 0.7120 |
| 0.5419 | 24.3 | 5200 | 0.5572 | 0.7273 | 0.7273 |
| 0.5417 | 25.23 | 5400 | 0.5580 | 0.7236 | 0.7235 |
| 0.543 | 26.17 | 5600 | 0.5601 | 0.7247 | 0.7249 |
| 0.5417 | 27.1 | 5800 | 0.5582 | 0.7210 | 0.7211 |
| 0.537 | 28.04 | 6000 | 0.5559 | 0.7278 | 0.7276 |
| 0.5398 | 28.97 | 6200 | 0.5688 | 0.7181 | 0.7191 |
| 0.5388 | 29.91 | 6400 | 0.5613 | 0.7209 | 0.7211 |
| 0.5377 | 30.84 | 6600 | 0.5777 | 0.7103 | 0.7120 |
| 0.5377 | 31.78 | 6800 | 0.5595 | 0.7237 | 0.7238 |
| 0.5377 | 32.71 | 7000 | 0.5587 | 0.7241 | 0.7240 |
| 0.5366 | 33.64 | 7200 | 0.5622 | 0.7176 | 0.7185 |
| 0.5378 | 34.58 | 7400 | 0.5618 | 0.7205 | 0.7208 |
| 0.5409 | 35.51 | 7600 | 0.5575 | 0.7265 | 0.7267 |
| 0.5319 | 36.45 | 7800 | 0.5599 | 0.7228 | 0.7229 |
| 0.5382 | 37.38 | 8000 | 0.5710 | 0.7146 | 0.7161 |
| 0.5352 | 38.32 | 8200 | 0.5655 | 0.7174 | 0.7182 |
| 0.535 | 39.25 | 8400 | 0.5594 | 0.7232 | 0.7235 |
| 0.5389 | 40.19 | 8600 | 0.5579 | 0.7231 | 0.7232 |
| 0.5385 | 41.12 | 8800 | 0.5645 | 0.7188 | 0.7196 |
| 0.5339 | 42.06 | 9000 | 0.5638 | 0.7191 | 0.7196 |
| 0.5352 | 42.99 | 9200 | 0.5672 | 0.7173 | 0.7182 |
| 0.5333 | 43.93 | 9400 | 0.5679 | 0.7164 | 0.7173 |
| 0.5348 | 44.86 | 9600 | 0.5623 | 0.7203 | 0.7208 |
| 0.5339 | 45.79 | 9800 | 0.5631 | 0.7212 | 0.7217 |
| 0.5366 | 46.73 | 10000 | 0.5641 | 0.7205 | 0.7211 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:12:27+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H4ac-seqsight\_4096\_512\_15M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5714
* F1 Score: 0.7100
* Accuracy: 0.7097
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
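In the absence of documented usage, the sketch below is one generic way to load this repository (`aryachakraborty/moondreamFine-Tuned`); the use of `AutoModel` and `trust_remote_code=True` are assumptions, not the author's documented instructions.

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "aryachakraborty/moondreamFine-Tuned"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
```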
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | aryachakraborty/moondreamFine-Tuned | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:12:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
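The repository name suggests these weights are PEFT adapters for Llama-2-7b; under that unverified assumption, one plausible loading sketch is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # inferred from the repo name, not documented
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_4_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapters
```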
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_4_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:12:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt_fine_tuned_boolq_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6430
- Accuracy: 0.7778
- F1: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
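A minimal prompt-tuning sketch consistent with this card is shown below; the number of virtual tokens and the random initialization are assumptions, since the released adapter configuration is not reproduced here.

```python
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "EMBEDDIA/crosloengual-bert", num_labels=2
)

# num_virtual_tokens and the random init are assumptions
prompt_config = PromptTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    prompt_tuning_init=PromptTuningInit.RANDOM,
)
model = get_peft_model(base, prompt_config)
```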
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.6715 | 4.1667 | 50 | 0.6810 | 0.6667 | 0.6222 |
| 0.6634 | 8.3333 | 100 | 0.6644 | 0.7778 | 0.6806 |
| 0.6531 | 12.5 | 150 | 0.6554 | 0.7778 | 0.6806 |
| 0.6609 | 16.6667 | 200 | 0.6495 | 0.7778 | 0.6806 |
| 0.6565 | 20.8333 | 250 | 0.6466 | 0.7778 | 0.6806 |
| 0.6514 | 25.0 | 300 | 0.6448 | 0.7778 | 0.6806 |
| 0.656 | 29.1667 | 350 | 0.6436 | 0.7778 | 0.6806 |
| 0.6584 | 33.3333 | 400 | 0.6430 | 0.7778 | 0.6806 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "prompt_fine_tuned_boolq_croslo", "results": []}]} | lenatr99/prompt_fine_tuned_boolq_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T18:14:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| prompt\_fine\_tuned\_boolq\_croslo
==================================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6430
* Accuracy: 0.7778
* F1: 0.6806
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
44,
99,
5,
51
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
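
No snippet was filled in above, so the following is a hedged sketch of one way to load this checkpoint, based only on the repository tags (`llama`, `text-generation`, `4-bit`). The quantization settings and the prompt are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "hi000000/llama2-koen_insta_TG"  # this repository

# 4-bit loading is assumed from the repo tags; drop this if the weights are already quantized
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

prompt = "안녕하세요,"  # any seed text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```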
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hi000000/llama2-koen_insta_TG | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-03T18:14:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
48,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
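
In place of the missing snippet, here is a hedged sketch for conversational use. It assumes the tokenizer ships a chat template; the `conversational` tag suggests, but does not guarantee, this.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "cilantro9246/h5yd4pk"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
# apply_chat_template fails if no chat template is configured; verify before relying on it
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```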
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/h5yd4pk | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:14:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Noise Ko - Dearlie
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Noise Data dataset.
It achieves the following results on the evaluation set (a transcription sketch follows the metrics):
- Loss: 0.9216
- CER: 37.5124
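
A minimal transcription sketch is shown below; the `language`/`task` hints are assumptions based on the card's Korean (`ko`) tag, and `sample.wav` is a placeholder for your own audio file.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Dearlie/whisper-noise4",  # this repository
)
# forcing Korean transcription is an assumption from the "ko" language tag
result = asr("sample.wav", generate_kwargs={"language": "korean", "task": "transcribe"})
print(result["text"])
```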
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
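
Roughly, these settings correspond to the `Seq2SeqTrainingArguments` below; this is a sketch only (model, processor, and data collator omitted), with `fp16=True` standing in for "Native AMP".

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-noise4",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,   # "training_steps: 4000"
    fp16=True,        # mixed_precision_training: Native AMP
)
```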
### Training results
| Training Loss | Epoch  | Step | Validation Loss | CER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.3536 | 0.8780 | 1000 | 1.3752 | 55.5864 |
| 0.944 | 1.7559 | 2000 | 1.0808 | 51.4185 |
| 0.5985 | 2.6339 | 3000 | 0.9612 | 40.2651 |
| 0.3207 | 3.5119 | 4000 | 0.9216 | 37.5124 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["ko"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["AIHub/noise"], "base_model": "openai/whisper-base", "model-index": [{"name": "Whisper Base Noise Ko - Dearlie", "results": []}]} | Dearlie/whisper-noise4 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:15:04+00:00 | [] | [
"ko"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #endpoints_compatible #region-us
| Whisper Base Noise Ko - Dearlie
===============================
This model is a fine-tuned version of openai/whisper-base on the Noise Data dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9216
* CER: 37.5124
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
27,
126,
5,
47
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5615
- F1 Score: 0.7138
- Accuracy: 0.7135
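
As no usage snippet is provided, here is a heavily hedged loading sketch. It assumes the backbone loads through the standard Auto classes (possibly requiring `trust_remote_code=True`) and that the task is binary classification of DNA sequences, inferred only from the GUE H4ac dataset name.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_15M-L8_f"  # this repository

# trust_remote_code may or may not be needed; the backbone's loading path is an assumption
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # dummy DNA sequence
print(model(**inputs).logits)
```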
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6318 | 0.93 | 200 | 0.5938 | 0.6854 | 0.6850 |
| 0.5906 | 1.87 | 400 | 0.5763 | 0.7058 | 0.7056 |
| 0.5713 | 2.8 | 600 | 0.5644 | 0.7167 | 0.7164 |
| 0.5635 | 3.74 | 800 | 0.5581 | 0.7173 | 0.7170 |
| 0.5581 | 4.67 | 1000 | 0.5571 | 0.7243 | 0.7240 |
| 0.5507 | 5.61 | 1200 | 0.5653 | 0.7099 | 0.7111 |
| 0.5501 | 6.54 | 1400 | 0.5682 | 0.7098 | 0.7117 |
| 0.5378 | 7.48 | 1600 | 0.5529 | 0.7264 | 0.7264 |
| 0.5441 | 8.41 | 1800 | 0.5487 | 0.7334 | 0.7334 |
| 0.5411 | 9.35 | 2000 | 0.5536 | 0.7251 | 0.7261 |
| 0.5303 | 10.28 | 2200 | 0.5449 | 0.7342 | 0.7340 |
| 0.5347 | 11.21 | 2400 | 0.5496 | 0.7342 | 0.7346 |
| 0.5299 | 12.15 | 2600 | 0.5546 | 0.7299 | 0.7308 |
| 0.5252 | 13.08 | 2800 | 0.5557 | 0.7319 | 0.7326 |
| 0.5231 | 14.02 | 3000 | 0.5428 | 0.7395 | 0.7393 |
| 0.5218 | 14.95 | 3200 | 0.5472 | 0.7368 | 0.7370 |
| 0.5184 | 15.89 | 3400 | 0.5525 | 0.7284 | 0.7293 |
| 0.5149 | 16.82 | 3600 | 0.5551 | 0.7309 | 0.7320 |
| 0.5215 | 17.76 | 3800 | 0.5728 | 0.7206 | 0.7232 |
| 0.5161 | 18.69 | 4000 | 0.5411 | 0.7394 | 0.7393 |
| 0.509 | 19.63 | 4200 | 0.5479 | 0.7371 | 0.7378 |
| 0.5112 | 20.56 | 4400 | 0.5442 | 0.7384 | 0.7384 |
| 0.5077 | 21.5 | 4600 | 0.5840 | 0.7172 | 0.7205 |
| 0.5087 | 22.43 | 4800 | 0.5446 | 0.7359 | 0.7361 |
| 0.5076 | 23.36 | 5000 | 0.5622 | 0.7330 | 0.7343 |
| 0.5053 | 24.3 | 5200 | 0.5420 | 0.7372 | 0.7372 |
| 0.5054 | 25.23 | 5400 | 0.5466 | 0.7337 | 0.7337 |
| 0.505 | 26.17 | 5600 | 0.5463 | 0.7392 | 0.7393 |
| 0.5027 | 27.1 | 5800 | 0.5418 | 0.7386 | 0.7384 |
| 0.4982 | 28.04 | 6000 | 0.5483 | 0.7353 | 0.7352 |
| 0.501 | 28.97 | 6200 | 0.5582 | 0.7386 | 0.7393 |
| 0.499 | 29.91 | 6400 | 0.5603 | 0.7320 | 0.7328 |
| 0.4981 | 30.84 | 6600 | 0.5697 | 0.7291 | 0.7308 |
| 0.4972 | 31.78 | 6800 | 0.5533 | 0.7344 | 0.7346 |
| 0.4968 | 32.71 | 7000 | 0.5489 | 0.7356 | 0.7355 |
| 0.4939 | 33.64 | 7200 | 0.5589 | 0.7313 | 0.7326 |
| 0.4956 | 34.58 | 7400 | 0.5533 | 0.7376 | 0.7378 |
| 0.4986 | 35.51 | 7600 | 0.5550 | 0.7355 | 0.7361 |
| 0.4904 | 36.45 | 7800 | 0.5549 | 0.7362 | 0.7364 |
| 0.4937 | 37.38 | 8000 | 0.5652 | 0.7284 | 0.7299 |
| 0.4908 | 38.32 | 8200 | 0.5584 | 0.7325 | 0.7331 |
| 0.493 | 39.25 | 8400 | 0.5558 | 0.7345 | 0.7349 |
| 0.4948 | 40.19 | 8600 | 0.5554 | 0.7350 | 0.7352 |
| 0.4935 | 41.12 | 8800 | 0.5646 | 0.7306 | 0.7317 |
| 0.4886 | 42.06 | 9000 | 0.5614 | 0.7314 | 0.7320 |
| 0.4894 | 42.99 | 9200 | 0.5677 | 0.7303 | 0.7314 |
| 0.4894 | 43.93 | 9400 | 0.5625 | 0.7312 | 0.7320 |
| 0.4896 | 44.86 | 9600 | 0.5583 | 0.7320 | 0.7326 |
| 0.4881 | 45.79 | 9800 | 0.5592 | 0.7326 | 0.7331 |
| 0.491 | 46.73 | 10000 | 0.5614 | 0.7312 | 0.7320 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:15:41+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H4ac-seqsight\_4096\_512\_15M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5615
* F1 Score: 0.7138
* Accuracy: 0.7135
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5918
- F1 Score: 0.7236
- Accuracy: 0.7235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6186 | 0.93 | 200 | 0.5918 | 0.6933 | 0.6933 |
| 0.5793 | 1.87 | 400 | 0.5751 | 0.7018 | 0.7023 |
| 0.5591 | 2.8 | 600 | 0.5583 | 0.7250 | 0.7246 |
| 0.5512 | 3.74 | 800 | 0.5456 | 0.7332 | 0.7328 |
| 0.5439 | 4.67 | 1000 | 0.5440 | 0.7375 | 0.7372 |
| 0.5353 | 5.61 | 1200 | 0.5517 | 0.7298 | 0.7305 |
| 0.5335 | 6.54 | 1400 | 0.5603 | 0.7234 | 0.7252 |
| 0.5198 | 7.48 | 1600 | 0.5492 | 0.7367 | 0.7370 |
| 0.5218 | 8.41 | 1800 | 0.5441 | 0.7349 | 0.7349 |
| 0.5212 | 9.35 | 2000 | 0.5436 | 0.7359 | 0.7358 |
| 0.5087 | 10.28 | 2200 | 0.5425 | 0.7378 | 0.7375 |
| 0.51 | 11.21 | 2400 | 0.5551 | 0.7333 | 0.7340 |
| 0.5048 | 12.15 | 2600 | 0.5626 | 0.7342 | 0.7349 |
| 0.4987 | 13.08 | 2800 | 0.5563 | 0.7317 | 0.7320 |
| 0.4969 | 14.02 | 3000 | 0.5503 | 0.7403 | 0.7402 |
| 0.4925 | 14.95 | 3200 | 0.5561 | 0.7402 | 0.7402 |
| 0.4863 | 15.89 | 3400 | 0.5522 | 0.7392 | 0.7393 |
| 0.4841 | 16.82 | 3600 | 0.5643 | 0.7275 | 0.7282 |
| 0.4853 | 17.76 | 3800 | 0.6054 | 0.7100 | 0.7135 |
| 0.4789 | 18.69 | 4000 | 0.5535 | 0.7355 | 0.7355 |
| 0.4725 | 19.63 | 4200 | 0.5510 | 0.7376 | 0.7375 |
| 0.4712 | 20.56 | 4400 | 0.5582 | 0.7393 | 0.7393 |
| 0.4669 | 21.5 | 4600 | 0.5871 | 0.7199 | 0.7223 |
| 0.4668 | 22.43 | 4800 | 0.5562 | 0.7385 | 0.7384 |
| 0.4628 | 23.36 | 5000 | 0.5738 | 0.7307 | 0.7311 |
| 0.4609 | 24.3 | 5200 | 0.5597 | 0.7354 | 0.7358 |
| 0.4583 | 25.23 | 5400 | 0.5669 | 0.7351 | 0.7355 |
| 0.4558 | 26.17 | 5600 | 0.5710 | 0.7347 | 0.7346 |
| 0.4547 | 27.1 | 5800 | 0.5620 | 0.7265 | 0.7267 |
| 0.4461 | 28.04 | 6000 | 0.5753 | 0.7391 | 0.7390 |
| 0.4475 | 28.97 | 6200 | 0.5889 | 0.7266 | 0.7273 |
| 0.4476 | 29.91 | 6400 | 0.5917 | 0.7188 | 0.7205 |
| 0.4454 | 30.84 | 6600 | 0.5926 | 0.7258 | 0.7273 |
| 0.4415 | 31.78 | 6800 | 0.5790 | 0.7303 | 0.7308 |
| 0.4409 | 32.71 | 7000 | 0.5769 | 0.7406 | 0.7405 |
| 0.436 | 33.64 | 7200 | 0.5983 | 0.7237 | 0.7258 |
| 0.4375 | 34.58 | 7400 | 0.5855 | 0.7256 | 0.7264 |
| 0.4385 | 35.51 | 7600 | 0.5842 | 0.7239 | 0.7249 |
| 0.4315 | 36.45 | 7800 | 0.5995 | 0.7242 | 0.7252 |
| 0.4324 | 37.38 | 8000 | 0.5890 | 0.7295 | 0.7305 |
| 0.4295 | 38.32 | 8200 | 0.5873 | 0.7297 | 0.7302 |
| 0.4306 | 39.25 | 8400 | 0.5867 | 0.7323 | 0.7323 |
| 0.4288 | 40.19 | 8600 | 0.5916 | 0.7305 | 0.7308 |
| 0.4305 | 41.12 | 8800 | 0.5948 | 0.7248 | 0.7258 |
| 0.4235 | 42.06 | 9000 | 0.5942 | 0.7309 | 0.7314 |
| 0.4248 | 42.99 | 9200 | 0.6005 | 0.7256 | 0.7267 |
| 0.4218 | 43.93 | 9400 | 0.5973 | 0.7285 | 0.7293 |
| 0.4251 | 44.86 | 9600 | 0.5925 | 0.7291 | 0.7296 |
| 0.4187 | 45.79 | 9800 | 0.5941 | 0.7304 | 0.7308 |
| 0.4226 | 46.73 | 10000 | 0.5977 | 0.7289 | 0.7296 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:16:09+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H4ac-seqsight\_4096\_512\_15M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5918
* F1 Score: 0.7236
* Accuracy: 0.7235
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
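
Although the card gives no usage example, the repository name suggests an IMDB-style classifier; a hedged inference sketch is shown below (the label names and their semantics are assumptions).

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0",
)
# label semantics are an assumption; the repo name only suggests an IMDB-style sentiment task
print(clf("A surprisingly touching film with a strong lead performance."))
```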
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:16:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0
This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
70,
63,
7,
9,
9,
4,
93,
5,
40
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-160m_niki-041a_imdb_random-token-1280_10-rounds_seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
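
Since the snippet is missing and the card names no task, only a generic load is sketched here; `AutoModel` is a neutral assumption, and the output-shape check applies to encoder-style backbones.

```python
from transformers import AutoTokenizer, AutoModel

model_id = "Armandodelca/Prototipo_6_EMI"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # task head unknown; AutoModel is a neutral default

inputs = tokenizer("Hola, mundo", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # assumes an encoder-style output
```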
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Armandodelca/Prototipo_6_EMI | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:16:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
22,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_copa_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.49
- F1: 0.4913
## Model description
More information needed
## Intended uses & limitations
More information needed
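The card does not yet document usage. As a hedged starting point only, the published adapter can presumably be loaded on top of its base model as below; the multiple-choice head is an assumption inferred from the COPA task name, not something the card states.

```python
from peft import PeftModel
from transformers import AutoModelForMultipleChoice, AutoTokenizer

base = "EMBEDDIA/crosloengual-bert"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMultipleChoice.from_pretrained(base)  # head type assumed, not stated in the card
model = PeftModel.from_pretrained(model, "lenatr99/lora_fine_tuned_copa_croslo")
model.eval()
```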
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
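As a hedged illustration, these settings map onto 🤗 `transformers`/`peft` roughly as follows. The LoRA rank, alpha, dropout, and target modules are not recorded in this card, so those values are placeholders.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForMultipleChoice, TrainingArguments

model = AutoModelForMultipleChoice.from_pretrained("EMBEDDIA/crosloengual-bert")
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1))  # placeholder LoRA values

args = TrainingArguments(
    output_dir="lora_fine_tuned_copa_croslo",
    learning_rate=3e-3,              # 0.003, as listed above
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=400,
)
```

Adam with the betas and epsilon listed above is the `TrainingArguments` default, so it needs no explicit setting.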
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7121 | 1.0 | 50 | 0.6931 | 0.52 | 0.5208 |
| 0.7076 | 2.0 | 100 | 0.6931 | 0.43 | 0.4312 |
| 0.7284 | 3.0 | 150 | 0.6931 | 0.48 | 0.4813 |
| 0.7243 | 4.0 | 200 | 0.6931 | 0.44 | 0.4369 |
| 0.7066 | 5.0 | 250 | 0.6931 | 0.45 | 0.4494 |
| 0.7135 | 6.0 | 300 | 0.6931 | 0.44 | 0.4409 |
| 0.7269 | 7.0 | 350 | 0.6931 | 0.49 | 0.4905 |
| 0.7078 | 8.0 | 400 | 0.6931 | 0.49 | 0.4913 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "lora_fine_tuned_copa_croslo", "results": []}]} | lenatr99/lora_fine_tuned_copa_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T18:17:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| lora\_fine\_tuned\_copa\_croslo
===============================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6931
* Accuracy: 0.49
* F1: 0.4913
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
44,
99,
5,
51
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loha_fine_tuned_copa_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2211
- Accuracy: 0.58
- F1: 0.5810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
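For orientation, here is a hedged sketch of a LoHa adapter built with `peft` (which ships `LoHaConfig`). The rank, alpha, and target modules below are illustrative; the card records only the optimizer-level settings above.

```python
from peft import LoHaConfig, get_peft_model
from transformers import AutoModelForMultipleChoice

model = AutoModelForMultipleChoice.from_pretrained("EMBEDDIA/crosloengual-bert")  # head type assumed
config = LoHaConfig(r=8, alpha=16, target_modules=["query", "value"])  # illustrative values
model = get_peft_model(model, config)
model.print_trainable_parameters()
```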
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7164 | 1.0 | 50 | 0.6935 | 0.51 | 0.5095 |
| 0.6335 | 2.0 | 100 | 0.6819 | 0.49 | 0.4894 |
| 0.5152 | 3.0 | 150 | 0.6831 | 0.57 | 0.5683 |
| 0.353 | 4.0 | 200 | 0.7947 | 0.59 | 0.5910 |
| 0.2314 | 5.0 | 250 | 0.8754 | 0.6 | 0.6 |
| 0.0845 | 6.0 | 300 | 1.1719 | 0.59 | 0.5904 |
| 0.0734 | 7.0 | 350 | 1.1997 | 0.57 | 0.5709 |
| 0.0537 | 8.0 | 400 | 1.2211 | 0.58 | 0.5810 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "loha_fine_tuned_copa_croslo", "results": []}]} | lenatr99/loha_fine_tuned_copa_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T18:18:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| loha\_fine\_tuned\_copa\_croslo
===============================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2211
* Accuracy: 0.58
* F1: 0.5810
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
44,
99,
5,
51
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
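Until the authors fill this in, a minimal sketch can be inferred from the repository metadata: the adapter `asbabiy/AspectLens-BA-Large-V2` sits on the base model `meta-llama/Meta-Llama-3-70B-Instruct`. The dtype and device settings below are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Meta-Llama-3-70B-Instruct"  # base model recorded in the adapter metadata
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, "asbabiy/AspectLens-BA-Large-V2")
```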
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-70B-Instruct"} | asbabiy/AspectLens-BA-Large-V2 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"region:us"
] | null | 2024-05-03T18:18:53+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
44,
6,
4,
50,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5,
13
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.10.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_8_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:19:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | halitefe/lora_finetuned_turkish_gpt2_large_750m_instruct_v0.1_rag | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:21:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
 To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
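For readers who want the core idea without leaving the page, here is a minimal, self-contained REINFORCE sketch. The network size, learning rate, discount factor, and episode count are illustrative; they are not the settings used to train this checkpoint.

```python
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # 4 observations -> 2 actions
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, computed backwards, then normalized for stability
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()  # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```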
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-cartpole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | miansumairjaved/Reinforce-cartpole-v1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-05-03T18:23:26+00:00 | [] | [] | TAGS
#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing CartPole-v1
This is a trained model of a Reinforce agent playing CartPole-v1.
 To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
32,
46
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4457
- F1 Score: 0.8060
- Accuracy: 0.8069
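The card does not include a usage snippet; the sketch below shows one plausible way to attach the adapter to its base checkpoint with PEFT. The `AutoModelForSequenceClassification` head, `num_labels=2`, and `trust_remote_code=True` are assumptions rather than details taken from this card.

```python
# Hedged loading sketch -- head class, label count and trust_remote_code are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_15M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the adapter weights
model.eval()
```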
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5537 | 1.1 | 200 | 0.4738 | 0.7918 | 0.7920 |
| 0.4851 | 2.21 | 400 | 0.4603 | 0.7955 | 0.7968 |
| 0.4753 | 3.31 | 600 | 0.4520 | 0.8051 | 0.8058 |
| 0.4634 | 4.42 | 800 | 0.4507 | 0.8038 | 0.8051 |
| 0.4622 | 5.52 | 1000 | 0.4539 | 0.8018 | 0.8037 |
| 0.4554 | 6.63 | 1200 | 0.4521 | 0.8011 | 0.8031 |
| 0.4584 | 7.73 | 1400 | 0.4403 | 0.8096 | 0.8103 |
| 0.4507 | 8.84 | 1600 | 0.4462 | 0.8047 | 0.8065 |
| 0.4505 | 9.94 | 1800 | 0.4404 | 0.8126 | 0.8124 |
| 0.4472 | 11.05 | 2000 | 0.4417 | 0.8062 | 0.8079 |
| 0.4472 | 12.15 | 2200 | 0.4370 | 0.8116 | 0.8128 |
| 0.4408 | 13.26 | 2400 | 0.4368 | 0.8129 | 0.8141 |
| 0.4424 | 14.36 | 2600 | 0.4336 | 0.8177 | 0.8183 |
| 0.4411 | 15.47 | 2800 | 0.4508 | 0.7965 | 0.7996 |
| 0.4366 | 16.57 | 3000 | 0.4340 | 0.8155 | 0.8162 |
| 0.4371 | 17.68 | 3200 | 0.4346 | 0.8081 | 0.8093 |
| 0.4344 | 18.78 | 3400 | 0.4309 | 0.8142 | 0.8148 |
| 0.4342 | 19.89 | 3600 | 0.4443 | 0.8038 | 0.8062 |
| 0.4343 | 20.99 | 3800 | 0.4393 | 0.8099 | 0.8114 |
| 0.4339 | 22.1 | 4000 | 0.4330 | 0.8094 | 0.8107 |
| 0.432 | 23.2 | 4200 | 0.4374 | 0.8099 | 0.8114 |
| 0.4332 | 24.31 | 4400 | 0.4383 | 0.8096 | 0.8110 |
| 0.4237 | 25.41 | 4600 | 0.4381 | 0.8125 | 0.8138 |
| 0.432 | 26.52 | 4800 | 0.4335 | 0.8148 | 0.8159 |
| 0.4322 | 27.62 | 5000 | 0.4324 | 0.8133 | 0.8145 |
| 0.426 | 28.73 | 5200 | 0.4322 | 0.8120 | 0.8135 |
| 0.4264 | 29.83 | 5400 | 0.4309 | 0.8122 | 0.8135 |
| 0.4252 | 30.94 | 5600 | 0.4313 | 0.8132 | 0.8141 |
| 0.4276 | 32.04 | 5800 | 0.4303 | 0.8170 | 0.8176 |
| 0.4259 | 33.15 | 6000 | 0.4315 | 0.8133 | 0.8145 |
| 0.4221 | 34.25 | 6200 | 0.4340 | 0.8123 | 0.8135 |
| 0.4239 | 35.36 | 6400 | 0.4388 | 0.8109 | 0.8124 |
| 0.4252 | 36.46 | 6600 | 0.4284 | 0.8157 | 0.8166 |
| 0.4245 | 37.57 | 6800 | 0.4319 | 0.8140 | 0.8152 |
| 0.4209 | 38.67 | 7000 | 0.4333 | 0.8129 | 0.8141 |
| 0.4222 | 39.78 | 7200 | 0.4320 | 0.8141 | 0.8152 |
| 0.424 | 40.88 | 7400 | 0.4307 | 0.8145 | 0.8155 |
| 0.4232 | 41.99 | 7600 | 0.4325 | 0.8148 | 0.8159 |
| 0.4209 | 43.09 | 7800 | 0.4339 | 0.8149 | 0.8162 |
| 0.421 | 44.2 | 8000 | 0.4324 | 0.8147 | 0.8159 |
| 0.4206 | 45.3 | 8200 | 0.4294 | 0.8163 | 0.8173 |
| 0.4211 | 46.41 | 8400 | 0.4302 | 0.8162 | 0.8173 |
| 0.4216 | 47.51 | 8600 | 0.4322 | 0.8140 | 0.8152 |
| 0.419 | 48.62 | 8800 | 0.4332 | 0.8129 | 0.8141 |
| 0.4188 | 49.72 | 9000 | 0.4320 | 0.8151 | 0.8162 |
| 0.4241 | 50.83 | 9200 | 0.4340 | 0.8114 | 0.8128 |
| 0.4191 | 51.93 | 9400 | 0.4317 | 0.8151 | 0.8162 |
| 0.4191 | 53.04 | 9600 | 0.4318 | 0.8151 | 0.8162 |
| 0.4189 | 54.14 | 9800 | 0.4311 | 0.8148 | 0.8159 |
| 0.4188 | 55.25 | 10000 | 0.4326 | 0.8147 | 0.8159 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:25:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_4096\_512\_15M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4457
* F1 Score: 0.8060
* Accuracy: 0.8069
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_8_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:25:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-to-image | diffusers |
# AutoTrain SDXL LoRA DreamBooth - viinayhegde/sdxlprivate
<Gallery />
## Model description
These are viinayhegde/sdxlprivate LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use `photo of vinay hegde` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/viinayhegde/sdxlprivate/tree/main) them in the Files & versions tab.
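The card stops at the download step; a minimal diffusers inference sketch, assuming the weights load through the standard `load_lora_weights` path:

```python
# Assumed usage -- not part of the original card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("viinayhegde/sdxlprivate")  # the LoRA weights from this repo

image = pipe("photo of vinay hegde", num_inference_steps=25).images[0]
image.save("sample.png")
```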
| {"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "photo of vinay hegde"} | viinayhegde/sdxlprivate | null | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-05-03T18:25:42+00:00 | [] | [] | TAGS
#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# AutoTrain SDXL LoRA DreamBooth - viinayhegde/sdxlprivate
<Gallery />
## Model description
These are viinayhegde/sdxlprivate LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use photo of vinay hegde to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# AutoTrain SDXL LoRA DreamBooth - viinayhegde/sdxlprivate\n\n<Gallery />",
"## Model description\n\nThese are viinayhegde/sdxlprivate LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use photo of vinay hegde to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# AutoTrain SDXL LoRA DreamBooth - viinayhegde/sdxlprivate\n\n<Gallery />",
"## Model description\n\nThese are viinayhegde/sdxlprivate LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use photo of vinay hegde to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
68,
27,
69,
20,
25
] | [
"TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n# AutoTrain SDXL LoRA DreamBooth - viinayhegde/sdxlprivate\n\n<Gallery />## Model description\n\nThese are viinayhegde/sdxlprivate LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.## Trigger words\n\nYou should use photo of vinay hegde to trigger the image generation.## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4311
- F1 Score: 0.8116
- Accuracy: 0.8121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
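Expressed through the 🤗 `TrainingArguments` API, the list above maps roughly as follows. This is a sketch: the original training script is not part of the card, and treating the listed batch sizes as per-device values is an assumption.

```python
# Sketch of the listed hyperparameters as TrainingArguments; output_dir is illustrative.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_4096_512_15M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,            # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```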
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5246 | 1.1 | 200 | 0.4538 | 0.8062 | 0.8062 |
| 0.4671 | 2.21 | 400 | 0.4454 | 0.8068 | 0.8076 |
| 0.4603 | 3.31 | 600 | 0.4437 | 0.8077 | 0.8089 |
| 0.4477 | 4.42 | 800 | 0.4433 | 0.8061 | 0.8079 |
| 0.4451 | 5.52 | 1000 | 0.4458 | 0.8040 | 0.8062 |
| 0.437 | 6.63 | 1200 | 0.4462 | 0.8026 | 0.8048 |
| 0.4394 | 7.73 | 1400 | 0.4345 | 0.8069 | 0.8083 |
| 0.4298 | 8.84 | 1600 | 0.4494 | 0.8015 | 0.8037 |
| 0.4286 | 9.94 | 1800 | 0.4278 | 0.8150 | 0.8152 |
| 0.4244 | 11.05 | 2000 | 0.4401 | 0.8080 | 0.8093 |
| 0.4236 | 12.15 | 2200 | 0.4264 | 0.8179 | 0.8183 |
| 0.4189 | 13.26 | 2400 | 0.4320 | 0.8129 | 0.8141 |
| 0.4191 | 14.36 | 2600 | 0.4236 | 0.8180 | 0.8183 |
| 0.4178 | 15.47 | 2800 | 0.4400 | 0.8072 | 0.8093 |
| 0.4122 | 16.57 | 3000 | 0.4262 | 0.8192 | 0.8193 |
| 0.4135 | 17.68 | 3200 | 0.4285 | 0.8165 | 0.8176 |
| 0.41 | 18.78 | 3400 | 0.4339 | 0.8130 | 0.8145 |
| 0.4086 | 19.89 | 3600 | 0.4280 | 0.8148 | 0.8159 |
| 0.406 | 20.99 | 3800 | 0.4364 | 0.8078 | 0.8093 |
| 0.4063 | 22.1 | 4000 | 0.4271 | 0.8124 | 0.8135 |
| 0.4039 | 23.2 | 4200 | 0.4332 | 0.8129 | 0.8141 |
| 0.4037 | 24.31 | 4400 | 0.4381 | 0.8081 | 0.8096 |
| 0.3954 | 25.41 | 4600 | 0.4299 | 0.8154 | 0.8162 |
| 0.4022 | 26.52 | 4800 | 0.4314 | 0.8146 | 0.8155 |
| 0.4034 | 27.62 | 5000 | 0.4370 | 0.8103 | 0.8121 |
| 0.3952 | 28.73 | 5200 | 0.4399 | 0.8047 | 0.8069 |
| 0.3944 | 29.83 | 5400 | 0.4279 | 0.8148 | 0.8155 |
| 0.3933 | 30.94 | 5600 | 0.4331 | 0.8146 | 0.8155 |
| 0.3949 | 32.04 | 5800 | 0.4285 | 0.8166 | 0.8173 |
| 0.3933 | 33.15 | 6000 | 0.4390 | 0.8064 | 0.8083 |
| 0.3869 | 34.25 | 6200 | 0.4355 | 0.8165 | 0.8173 |
| 0.3893 | 35.36 | 6400 | 0.4416 | 0.8117 | 0.8131 |
| 0.3915 | 36.46 | 6600 | 0.4334 | 0.8137 | 0.8148 |
| 0.3889 | 37.57 | 6800 | 0.4409 | 0.8091 | 0.8107 |
| 0.3864 | 38.67 | 7000 | 0.4365 | 0.8104 | 0.8117 |
| 0.3862 | 39.78 | 7200 | 0.4391 | 0.8099 | 0.8114 |
| 0.3892 | 40.88 | 7400 | 0.4320 | 0.8171 | 0.8180 |
| 0.3871 | 41.99 | 7600 | 0.4364 | 0.8145 | 0.8155 |
| 0.3849 | 43.09 | 7800 | 0.4407 | 0.8083 | 0.8100 |
| 0.3847 | 44.2 | 8000 | 0.4405 | 0.8077 | 0.8093 |
| 0.3848 | 45.3 | 8200 | 0.4326 | 0.8153 | 0.8162 |
| 0.3812 | 46.41 | 8400 | 0.4347 | 0.8154 | 0.8162 |
| 0.3847 | 47.51 | 8600 | 0.4451 | 0.8068 | 0.8086 |
| 0.3809 | 48.62 | 8800 | 0.4413 | 0.8110 | 0.8124 |
| 0.3806 | 49.72 | 9000 | 0.4404 | 0.8107 | 0.8121 |
| 0.3851 | 50.83 | 9200 | 0.4453 | 0.8067 | 0.8086 |
| 0.3806 | 51.93 | 9400 | 0.4405 | 0.8092 | 0.8107 |
| 0.3785 | 53.04 | 9600 | 0.4401 | 0.8114 | 0.8128 |
| 0.3798 | 54.14 | 9800 | 0.4395 | 0.8122 | 0.8135 |
| 0.3796 | 55.25 | 10000 | 0.4421 | 0.8092 | 0.8107 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:25:47+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_4096\_512\_15M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4311
* F1 Score: 0.8116
* Accuracy: 0.8121
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4331
- F1 Score: 0.8142
- Accuracy: 0.8148
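A plausible shape for the metric function behind the F1 and accuracy numbers above (the macro averaging mode is an assumption; the card only names the metrics):

```python
# Sketch of a compute_metrics callback; the F1 averaging mode is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```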
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5113 | 1.1 | 200 | 0.4452 | 0.8090 | 0.8093 |
| 0.4599 | 2.21 | 400 | 0.4383 | 0.8131 | 0.8135 |
| 0.4502 | 3.31 | 600 | 0.4433 | 0.8050 | 0.8065 |
| 0.4351 | 4.42 | 800 | 0.4438 | 0.8031 | 0.8051 |
| 0.4326 | 5.52 | 1000 | 0.4385 | 0.8093 | 0.8110 |
| 0.4231 | 6.63 | 1200 | 0.4482 | 0.7998 | 0.8024 |
| 0.4248 | 7.73 | 1400 | 0.4374 | 0.7989 | 0.8013 |
| 0.4148 | 8.84 | 1600 | 0.4559 | 0.8028 | 0.8058 |
| 0.4113 | 9.94 | 1800 | 0.4244 | 0.8202 | 0.8204 |
| 0.4057 | 11.05 | 2000 | 0.4385 | 0.8075 | 0.8089 |
| 0.4039 | 12.15 | 2200 | 0.4249 | 0.8204 | 0.8211 |
| 0.3986 | 13.26 | 2400 | 0.4289 | 0.8198 | 0.8207 |
| 0.3957 | 14.36 | 2600 | 0.4236 | 0.8242 | 0.8245 |
| 0.3929 | 15.47 | 2800 | 0.4455 | 0.8033 | 0.8058 |
| 0.3853 | 16.57 | 3000 | 0.4284 | 0.8231 | 0.8235 |
| 0.3843 | 17.68 | 3200 | 0.4365 | 0.8185 | 0.8193 |
| 0.3794 | 18.78 | 3400 | 0.4657 | 0.8000 | 0.8031 |
| 0.3756 | 19.89 | 3600 | 0.4370 | 0.8166 | 0.8176 |
| 0.3692 | 20.99 | 3800 | 0.4481 | 0.8101 | 0.8117 |
| 0.367 | 22.1 | 4000 | 0.4403 | 0.8158 | 0.8169 |
| 0.3661 | 23.2 | 4200 | 0.4496 | 0.8104 | 0.8117 |
| 0.3618 | 24.31 | 4400 | 0.4490 | 0.8087 | 0.8103 |
| 0.35 | 25.41 | 4600 | 0.4415 | 0.8242 | 0.8245 |
| 0.358 | 26.52 | 4800 | 0.4569 | 0.8088 | 0.8103 |
| 0.3544 | 27.62 | 5000 | 0.4640 | 0.8001 | 0.8024 |
| 0.3454 | 28.73 | 5200 | 0.4732 | 0.7968 | 0.7996 |
| 0.3415 | 29.83 | 5400 | 0.4413 | 0.8129 | 0.8135 |
| 0.3402 | 30.94 | 5600 | 0.4611 | 0.8078 | 0.8093 |
| 0.3413 | 32.04 | 5800 | 0.4476 | 0.8137 | 0.8145 |
| 0.3338 | 33.15 | 6000 | 0.4726 | 0.8028 | 0.8048 |
| 0.3288 | 34.25 | 6200 | 0.4716 | 0.8066 | 0.8079 |
| 0.329 | 35.36 | 6400 | 0.4770 | 0.8077 | 0.8093 |
| 0.3284 | 36.46 | 6600 | 0.4706 | 0.8028 | 0.8048 |
| 0.3228 | 37.57 | 6800 | 0.4752 | 0.8056 | 0.8072 |
| 0.3198 | 38.67 | 7000 | 0.4826 | 0.8000 | 0.8024 |
| 0.3236 | 39.78 | 7200 | 0.4756 | 0.8055 | 0.8072 |
| 0.3204 | 40.88 | 7400 | 0.4612 | 0.8092 | 0.8100 |
| 0.3184 | 41.99 | 7600 | 0.4703 | 0.8094 | 0.8107 |
| 0.3159 | 43.09 | 7800 | 0.4936 | 0.7955 | 0.7985 |
| 0.3103 | 44.2 | 8000 | 0.4910 | 0.8016 | 0.8034 |
| 0.312 | 45.3 | 8200 | 0.4685 | 0.8075 | 0.8086 |
| 0.3057 | 46.41 | 8400 | 0.4802 | 0.8050 | 0.8062 |
| 0.3104 | 47.51 | 8600 | 0.4947 | 0.7999 | 0.8020 |
| 0.3048 | 48.62 | 8800 | 0.4879 | 0.8034 | 0.8051 |
| 0.3073 | 49.72 | 9000 | 0.4780 | 0.8044 | 0.8058 |
| 0.3087 | 50.83 | 9200 | 0.4897 | 0.8013 | 0.8034 |
| 0.3026 | 51.93 | 9400 | 0.4837 | 0.8026 | 0.8041 |
| 0.2997 | 53.04 | 9600 | 0.4861 | 0.8033 | 0.8048 |
| 0.3015 | 54.14 | 9800 | 0.4860 | 0.8033 | 0.8048 |
| 0.3011 | 55.25 | 10000 | 0.4916 | 0.8040 | 0.8058 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:26:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_4096\_512\_15M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4331
* F1 Score: 0.8142
* Accuracy: 0.8148
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5443
- F1 Score: 0.7416
- Accuracy: 0.7427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
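The linear scheduler decays the learning rate from 5e-4 toward zero over the 10,000 steps. In code, the optimizer/scheduler pairing would look roughly like this; the warmup step count is an assumption, since the card does not list one.

```python
# Sketch of the optimizer/scheduler pairing above; num_warmup_steps is assumed to be 0.
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for the model parameters
optimizer = torch.optim.Adam(params, lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)
```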
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6404 | 1.01 | 200 | 0.6218 | 0.6602 | 0.6705 |
| 0.6099 | 2.02 | 400 | 0.6044 | 0.6924 | 0.6954 |
| 0.5942 | 3.03 | 600 | 0.5909 | 0.7087 | 0.7105 |
| 0.5819 | 4.04 | 800 | 0.5810 | 0.7152 | 0.7169 |
| 0.5707 | 5.05 | 1000 | 0.5720 | 0.7213 | 0.7225 |
| 0.5649 | 6.06 | 1200 | 0.5682 | 0.7289 | 0.7317 |
| 0.5614 | 7.07 | 1400 | 0.5630 | 0.7305 | 0.7330 |
| 0.5619 | 8.08 | 1600 | 0.5613 | 0.7296 | 0.7330 |
| 0.557 | 9.09 | 1800 | 0.5582 | 0.7308 | 0.7330 |
| 0.5536 | 10.1 | 2000 | 0.5553 | 0.7339 | 0.7361 |
| 0.5518 | 11.11 | 2200 | 0.5586 | 0.7302 | 0.7330 |
| 0.5515 | 12.12 | 2400 | 0.5533 | 0.7372 | 0.7386 |
| 0.5506 | 13.13 | 2600 | 0.5526 | 0.7348 | 0.7374 |
| 0.5507 | 14.14 | 2800 | 0.5499 | 0.7342 | 0.7361 |
| 0.5451 | 15.15 | 3000 | 0.5526 | 0.7323 | 0.7348 |
| 0.5455 | 16.16 | 3200 | 0.5532 | 0.7353 | 0.7386 |
| 0.5455 | 17.17 | 3400 | 0.5512 | 0.7348 | 0.7374 |
| 0.5464 | 18.18 | 3600 | 0.5460 | 0.7424 | 0.7440 |
| 0.5426 | 19.19 | 3800 | 0.5503 | 0.7375 | 0.7399 |
| 0.5438 | 20.2 | 4000 | 0.5460 | 0.7401 | 0.7421 |
| 0.5414 | 21.21 | 4200 | 0.5539 | 0.7321 | 0.7361 |
| 0.5414 | 22.22 | 4400 | 0.5490 | 0.7385 | 0.7405 |
| 0.537 | 23.23 | 4600 | 0.5520 | 0.7362 | 0.7396 |
| 0.5459 | 24.24 | 4800 | 0.5460 | 0.7376 | 0.7408 |
| 0.5353 | 25.25 | 5000 | 0.5464 | 0.7408 | 0.7424 |
| 0.542 | 26.26 | 5200 | 0.5458 | 0.7430 | 0.7440 |
| 0.5376 | 27.27 | 5400 | 0.5450 | 0.7365 | 0.7393 |
| 0.5365 | 28.28 | 5600 | 0.5455 | 0.7401 | 0.7415 |
| 0.5414 | 29.29 | 5800 | 0.5465 | 0.7363 | 0.7396 |
| 0.5365 | 30.3 | 6000 | 0.5441 | 0.7403 | 0.7424 |
| 0.539 | 31.31 | 6200 | 0.5432 | 0.7400 | 0.7421 |
| 0.5376 | 32.32 | 6400 | 0.5477 | 0.7350 | 0.7386 |
| 0.5351 | 33.33 | 6600 | 0.5460 | 0.7384 | 0.7415 |
| 0.5349 | 34.34 | 6800 | 0.5464 | 0.7382 | 0.7415 |
| 0.5328 | 35.35 | 7000 | 0.5432 | 0.7406 | 0.7424 |
| 0.538 | 36.36 | 7200 | 0.5427 | 0.7417 | 0.7440 |
| 0.5348 | 37.37 | 7400 | 0.5423 | 0.7388 | 0.7408 |
| 0.5337 | 38.38 | 7600 | 0.5425 | 0.7416 | 0.7431 |
| 0.533 | 39.39 | 7800 | 0.5435 | 0.7406 | 0.7424 |
| 0.5348 | 40.4 | 8000 | 0.5437 | 0.7402 | 0.7424 |
| 0.5316 | 41.41 | 8200 | 0.5453 | 0.7372 | 0.7402 |
| 0.5359 | 42.42 | 8400 | 0.5448 | 0.7405 | 0.7434 |
| 0.5321 | 43.43 | 8600 | 0.5444 | 0.7395 | 0.7421 |
| 0.5326 | 44.44 | 8800 | 0.5436 | 0.7393 | 0.7418 |
| 0.5324 | 45.45 | 9000 | 0.5435 | 0.7407 | 0.7427 |
| 0.5333 | 46.46 | 9200 | 0.5435 | 0.7399 | 0.7424 |
| 0.5365 | 47.47 | 9400 | 0.5434 | 0.7394 | 0.7421 |
| 0.5271 | 48.48 | 9600 | 0.5444 | 0.7398 | 0.7424 |
| 0.5359 | 49.49 | 9800 | 0.5438 | 0.7392 | 0.7418 |
| 0.5344 | 50.51 | 10000 | 0.5436 | 0.7395 | 0.7421 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:26:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_4096\_512\_15M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5443
* F1 Score: 0.7416
* Accuracy: 0.7427
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390
- F1 Score: 0.7439
- Accuracy: 0.7456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6264 | 1.01 | 200 | 0.6038 | 0.6919 | 0.6979 |
| 0.5874 | 2.02 | 400 | 0.5786 | 0.7160 | 0.7188 |
| 0.5675 | 3.03 | 600 | 0.5645 | 0.7258 | 0.7282 |
| 0.5597 | 4.04 | 800 | 0.5605 | 0.7309 | 0.7330 |
| 0.5539 | 5.05 | 1000 | 0.5544 | 0.7352 | 0.7361 |
| 0.5492 | 6.06 | 1200 | 0.5534 | 0.7295 | 0.7326 |
| 0.5455 | 7.07 | 1400 | 0.5509 | 0.7368 | 0.7393 |
| 0.5449 | 8.08 | 1600 | 0.5499 | 0.7367 | 0.7399 |
| 0.5398 | 9.09 | 1800 | 0.5457 | 0.7394 | 0.7415 |
| 0.537 | 10.1 | 2000 | 0.5466 | 0.7365 | 0.7396 |
| 0.5346 | 11.11 | 2200 | 0.5488 | 0.7368 | 0.7405 |
| 0.5337 | 12.12 | 2400 | 0.5401 | 0.7417 | 0.7434 |
| 0.5313 | 13.13 | 2600 | 0.5446 | 0.7377 | 0.7415 |
| 0.5303 | 14.14 | 2800 | 0.5412 | 0.7415 | 0.7434 |
| 0.5242 | 15.15 | 3000 | 0.5425 | 0.7406 | 0.7431 |
| 0.5251 | 16.16 | 3200 | 0.5433 | 0.7423 | 0.7453 |
| 0.5237 | 17.17 | 3400 | 0.5418 | 0.7418 | 0.7440 |
| 0.5246 | 18.18 | 3600 | 0.5372 | 0.7432 | 0.7453 |
| 0.5194 | 19.19 | 3800 | 0.5377 | 0.7475 | 0.7497 |
| 0.5211 | 20.2 | 4000 | 0.5385 | 0.7427 | 0.7446 |
| 0.5179 | 21.21 | 4200 | 0.5468 | 0.7387 | 0.7431 |
| 0.5152 | 22.22 | 4400 | 0.5465 | 0.7416 | 0.7443 |
| 0.5143 | 23.23 | 4600 | 0.5433 | 0.7416 | 0.7449 |
| 0.5191 | 24.24 | 4800 | 0.5417 | 0.7399 | 0.7440 |
| 0.5093 | 25.25 | 5000 | 0.5418 | 0.7424 | 0.7440 |
| 0.5149 | 26.26 | 5200 | 0.5413 | 0.7411 | 0.7418 |
| 0.511 | 27.27 | 5400 | 0.5416 | 0.7398 | 0.7431 |
| 0.5087 | 28.28 | 5600 | 0.5408 | 0.7425 | 0.7440 |
| 0.5131 | 29.29 | 5800 | 0.5441 | 0.7343 | 0.7386 |
| 0.5082 | 30.3 | 6000 | 0.5400 | 0.7442 | 0.7465 |
| 0.5096 | 31.31 | 6200 | 0.5374 | 0.7441 | 0.7459 |
| 0.5074 | 32.32 | 6400 | 0.5426 | 0.7397 | 0.7434 |
| 0.5043 | 33.33 | 6600 | 0.5466 | 0.7361 | 0.7399 |
| 0.5064 | 34.34 | 6800 | 0.5400 | 0.7433 | 0.7456 |
| 0.5019 | 35.35 | 7000 | 0.5420 | 0.7455 | 0.7475 |
| 0.5059 | 36.36 | 7200 | 0.5414 | 0.7368 | 0.7393 |
| 0.5031 | 37.37 | 7400 | 0.5413 | 0.7359 | 0.7386 |
| 0.5008 | 38.38 | 7600 | 0.5407 | 0.7507 | 0.7519 |
| 0.5006 | 39.39 | 7800 | 0.5429 | 0.7491 | 0.7503 |
| 0.5031 | 40.4 | 8000 | 0.5434 | 0.7473 | 0.7491 |
| 0.4971 | 41.41 | 8200 | 0.5460 | 0.7354 | 0.7386 |
| 0.5052 | 42.42 | 8400 | 0.5436 | 0.7414 | 0.7443 |
| 0.4996 | 43.43 | 8600 | 0.5441 | 0.7429 | 0.7453 |
| 0.4995 | 44.44 | 8800 | 0.5429 | 0.7371 | 0.7399 |
| 0.4962 | 45.45 | 9000 | 0.5444 | 0.7436 | 0.7456 |
| 0.5 | 46.46 | 9200 | 0.5433 | 0.7406 | 0.7431 |
| 0.5035 | 47.47 | 9400 | 0.5420 | 0.7393 | 0.7421 |
| 0.4937 | 48.48 | 9600 | 0.5442 | 0.7398 | 0.7424 |
| 0.5019 | 49.49 | 9800 | 0.5431 | 0.7401 | 0.7427 |
| 0.5008 | 50.51 | 10000 | 0.5430 | 0.7399 | 0.7424 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:27:29+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_4096\_512\_15M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5390
* F1 Score: 0.7439
* Accuracy: 0.7456
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5349
- F1 Score: 0.7496
- Accuracy: 0.7519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6167 | 1.01 | 200 | 0.5875 | 0.7086 | 0.7124 |
| 0.5741 | 2.02 | 400 | 0.5668 | 0.7256 | 0.7270 |
| 0.56 | 3.03 | 600 | 0.5549 | 0.7310 | 0.7336 |
| 0.5517 | 4.04 | 800 | 0.5546 | 0.7288 | 0.7323 |
| 0.5457 | 5.05 | 1000 | 0.5462 | 0.7399 | 0.7405 |
| 0.5396 | 6.06 | 1200 | 0.5461 | 0.7309 | 0.7348 |
| 0.5332 | 7.07 | 1400 | 0.5534 | 0.7345 | 0.7377 |
| 0.5307 | 8.08 | 1600 | 0.5408 | 0.7427 | 0.7449 |
| 0.5252 | 9.09 | 1800 | 0.5410 | 0.7448 | 0.7468 |
| 0.521 | 10.1 | 2000 | 0.5407 | 0.7432 | 0.7465 |
| 0.5149 | 11.11 | 2200 | 0.5450 | 0.7368 | 0.7415 |
| 0.5142 | 12.12 | 2400 | 0.5335 | 0.7492 | 0.7509 |
| 0.5089 | 13.13 | 2600 | 0.5410 | 0.7376 | 0.7421 |
| 0.5066 | 14.14 | 2800 | 0.5436 | 0.7433 | 0.7456 |
| 0.5 | 15.15 | 3000 | 0.5391 | 0.7444 | 0.7472 |
| 0.4968 | 16.16 | 3200 | 0.5414 | 0.7459 | 0.7478 |
| 0.4947 | 17.17 | 3400 | 0.5547 | 0.7463 | 0.7481 |
| 0.4935 | 18.18 | 3600 | 0.5397 | 0.7446 | 0.7468 |
| 0.4857 | 19.19 | 3800 | 0.5395 | 0.7487 | 0.7506 |
| 0.4849 | 20.2 | 4000 | 0.5424 | 0.7470 | 0.7494 |
| 0.4814 | 21.21 | 4200 | 0.5429 | 0.7459 | 0.7481 |
| 0.4779 | 22.22 | 4400 | 0.5572 | 0.7373 | 0.7399 |
| 0.4755 | 23.23 | 4600 | 0.5543 | 0.7482 | 0.7503 |
| 0.478 | 24.24 | 4800 | 0.5531 | 0.7394 | 0.7440 |
| 0.4669 | 25.25 | 5000 | 0.5558 | 0.7452 | 0.7465 |
| 0.4702 | 26.26 | 5200 | 0.5527 | 0.7445 | 0.7453 |
| 0.4662 | 27.27 | 5400 | 0.5539 | 0.7381 | 0.7421 |
| 0.4608 | 28.28 | 5600 | 0.5517 | 0.7407 | 0.7431 |
| 0.464 | 29.29 | 5800 | 0.5538 | 0.7400 | 0.7440 |
| 0.4595 | 30.3 | 6000 | 0.5603 | 0.7441 | 0.7468 |
| 0.4568 | 31.31 | 6200 | 0.5496 | 0.7450 | 0.7462 |
| 0.4523 | 32.32 | 6400 | 0.5671 | 0.7443 | 0.7472 |
| 0.4515 | 33.33 | 6600 | 0.5685 | 0.7358 | 0.7393 |
| 0.4515 | 34.34 | 6800 | 0.5595 | 0.7398 | 0.7415 |
| 0.4434 | 35.35 | 7000 | 0.5682 | 0.7472 | 0.7487 |
| 0.4484 | 36.36 | 7200 | 0.5599 | 0.7362 | 0.7383 |
| 0.4434 | 37.37 | 7400 | 0.5668 | 0.7375 | 0.7402 |
| 0.4377 | 38.38 | 7600 | 0.5732 | 0.7421 | 0.7431 |
| 0.4384 | 39.39 | 7800 | 0.5768 | 0.7416 | 0.7431 |
| 0.4402 | 40.4 | 8000 | 0.5728 | 0.7425 | 0.7440 |
| 0.4338 | 41.41 | 8200 | 0.5753 | 0.7361 | 0.7393 |
| 0.4418 | 42.42 | 8400 | 0.5750 | 0.7373 | 0.7396 |
| 0.435 | 43.43 | 8600 | 0.5767 | 0.7365 | 0.7386 |
| 0.4359 | 44.44 | 8800 | 0.5709 | 0.7400 | 0.7421 |
| 0.4303 | 45.45 | 9000 | 0.5781 | 0.7416 | 0.7434 |
| 0.4326 | 46.46 | 9200 | 0.5765 | 0.7398 | 0.7418 |
| 0.4349 | 47.47 | 9400 | 0.5723 | 0.7372 | 0.7396 |
| 0.4267 | 48.48 | 9600 | 0.5781 | 0.7409 | 0.7434 |
| 0.4332 | 49.49 | 9800 | 0.5762 | 0.7382 | 0.7405 |
| 0.4295 | 50.51 | 10000 | 0.5774 | 0.7398 | 0.7421 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:27:29+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_4096\_512\_15M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5349
* F1 Score: 0.7496
* Accuracy: 0.7519
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
## Model Details
We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from [ChatQA (1.0)](https://arxiv.org/abs/2401.10225), and it is built on top of [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), we converted the checkpoints to Hugging Face format.
## Other Resources
[Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)   [Evaluation Data](https://huggingface.co/datasets/nvidia/ConvRAG-Bench)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
## Benchmark Results
Results in ConvRAG Bench are as follows:
| | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 51.94 | 49.6 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 31.9 | 30.1 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |
Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset, so to ensure a fair comparison we also report average scores that exclude HybriDial. The data and evaluation scripts for ConvRAG can be found [here](https://huggingface.co/datasets/nvidia/ConvRAG-Bench).
## Prompt Format
<pre>
System: {System}
{Context}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
</pre>
## How to use
### take the whole document as context
This applies when the whole document fits into the model's context window, so there is no need to run retrieval over it.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "nvidia/Llama3-ChatQA-1.5-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]
document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""
def get_formatted_input(messages, context):
    system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
    instruction = "Please give a full and complete answer for the question."
    for item in messages:
        if item['role'] == "user":
            ## only apply this instruction for the first user turn
            item['content'] = instruction + " " + item['content']
            break
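    ## build the dialogue: prefix each turn with its speaker, then append "Assistant:" to cue the model's reply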
    conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
    formatted_input = system + "\n\n" + context + "\n\n" + conversation
    return formatted_input
formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### run retrieval to get top-n chunks as context
This applies when the document is too long to fit into the model, so retrieval is necessary. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B/tree/main/docs) for users to experiment with.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch
import json
## load ChatQA-1.5 tokenizer and model
model_id = "nvidia/Llama3-ChatQA-1.5-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
## load retriever tokenizer and model
retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')
## prepare documents, we take landrover car manual document that we provide as an example
chunk_list = json.load(open("docs.json"))['landrover']
messages = [
{"role": "user", "content": "how to connect the bluetooth in the car?"}
]
### running retrieval
## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip()
query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt')
ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt')
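## take the first-token ([CLS]) hidden state as the dense embedding for the query and each chunk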
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
## Compute similarity scores using dot product and rank the similarity
similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx)
## get top-n chunks (n=5)
retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]]
context = "\n\n".join(retrieved_chunks)
### running text generation
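## reuse the get_formatted_input() helper defined in the previous snippet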
formatted_input = get_formatted_input(messages, context)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Correspondence to
Zihan Liu (zihanl@nvidia.com), Wei Ping (wping@nvidia.com)
## Citation
<pre>
@article{liu2024chatqa,
title={ChatQA: Building GPT-4 Level Conversational QA Models},
author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2401.10225},
year={2024}}
</pre>
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
| {"language": ["en"], "license": "llama3", "tags": ["nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch"], "pipeline_tag": "text-generation"} | LoneStriker/Llama3-ChatQA-1.5-70B-GGUF | null | [
"region:us"
] | null | 2024-05-03T18:28:20+00:00 | [] | [
"en"
] | TAGS
#region-us
| Model Details
-------------
We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from ChatQA (1.0), and it is built on top of the Llama-3 base model. Specifically, we incorporate more conversational QA data to enhance its tabular-understanding and arithmetic-calculation capabilities. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained with Megatron-LM; we then converted the checkpoints to Hugging Face format.
Other Resources
---------------
Llama3-ChatQA-1.5-8B Evaluation Data Training Data Retriever
Benchmark Results
-----------------
Results in ConvRAG Bench are as follows:
Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset, so to ensure a fair comparison we also report average scores that exclude HybriDial. The data and evaluation scripts for ConvRAG can be found here.
Prompt Format
-------------
```
System: {System}
{Context}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
```
How to use
----------
### take the whole document as context
This applies when the whole document fits into the model's context window, so there is no need to run retrieval over it.
### run retrieval to get top-n chunks as context
This applies when the document is too long to fit into the model, so retrieval is necessary. Here, we use our Dragon-multiturn retriever, which can handle conversational queries. In addition, we provide a few documents for users to experiment with.
Correspondence to
-----------------
Zihan Liu (zihanl@URL), Wei Ping (wping@URL)
```
@article{liu2024chatqa,
title={ChatQA: Building GPT-4 Level Conversational QA Models},
author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2401.10225},
year={2024}}
```
License
-------
The use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT
| [
"### take the whole document as context\n\n\nThis can be applied to the scenario where the whole document can be fitted into the model, so that there is no need to run retrieval over the document.",
"### run retrieval to get top-n chunks as context\n\n\nThis can be applied to the scenario when the document is very long, so that it is necessary to run retrieval. Here, we use our Dragon-multiturn retriever which can handle conversatinoal query. In addition, we provide a few documents for users to play with.\n\n\nCorrespondence to\n-----------------\n\n\nZihan Liu (zihanl@URL), Wei Ping (wping@URL)\n\n\n\n```\n\n@article{liu2024chatqa,\n title={ChatQA: Building GPT-4 Level Conversational QA Models},\n author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},\n journal={arXiv preprint arXiv:2401.10225},\n year={2024}}\n\n```\n\nLicense\n-------\n\n\nThe use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT"
] | [
"TAGS\n#region-us \n",
"### take the whole document as context\n\n\nThis can be applied to the scenario where the whole document can be fitted into the model, so that there is no need to run retrieval over the document.",
"### run retrieval to get top-n chunks as context\n\n\nThis can be applied to the scenario when the document is very long, so that it is necessary to run retrieval. Here, we use our Dragon-multiturn retriever which can handle conversatinoal query. In addition, we provide a few documents for users to play with.\n\n\nCorrespondence to\n-----------------\n\n\nZihan Liu (zihanl@URL), Wei Ping (wping@URL)\n\n\n\n```\n\n@article{liu2024chatqa,\n title={ChatQA: Building GPT-4 Level Conversational QA Models},\n author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},\n journal={arXiv preprint arXiv:2401.10225},\n year={2024}}\n\n```\n\nLicense\n-------\n\n\nThe use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT"
] | [
5,
40,
240
] | [
"TAGS\n#region-us \n### take the whole document as context\n\n\nThis can be applied to the scenario where the whole document can be fitted into the model, so that there is no need to run retrieval over the document.### run retrieval to get top-n chunks as context\n\n\nThis can be applied to the scenario when the document is very long, so that it is necessary to run retrieval. Here, we use our Dragon-multiturn retriever which can handle conversatinoal query. In addition, we provide a few documents for users to play with.\n\n\nCorrespondence to\n-----------------\n\n\nZihan Liu (zihanl@URL), Wei Ping (wping@URL)\n\n\n\n```\n\n@article{liu2024chatqa,\n title={ChatQA: Building GPT-4 Level Conversational QA Models},\n author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},\n journal={arXiv preprint arXiv:2401.10225},\n year={2024}}\n\n```\n\nLicense\n-------\n\n\nThe use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT"
] |
text-generation | transformers |
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
This model is a DPO fine-tune of the `meta-llama/Meta-Llama-3-70B-Instruct` model.
# Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF)
# Prompt Template
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
You can use this model by passing `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4` as the model name to Hugging Face's
`transformers` library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|im_end|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>") # safer to have this too
]
outputs = pipe(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Sample Response
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
describe about pros and cons of docker system.<|im_end|>
<|im_start|>assistant
```
```
Here are the pros and cons of the Docker system:
**Pros:**
1. **Lightweight**: Docker containers are lightweight and use fewer resources compared to traditional virtual machines.
2. **Isolation**: Docker provides a high level of isolation between containers, ensuring that each application runs independently without interfering with others.
3. **Portability**: Docker containers are highly portable and can run on any system that supports Docker, without requiring specific dependencies or configurations.
4. **Efficient Use of Resources**: Docker containers can share the same kernel and use fewer resources, making them more efficient than traditional virtual machines.
5. **Fast Deployment**: Docker containers can be spun up quickly, making it ideal for rapid deployment and scaling of applications.
6. **Easy Maintenance**: Docker provides a simple way to manage and update applications, with features like rolling updates and rollbacks.
7. **Security**: Docker provides a secure environment for applications, with features like network isolation and encryption.
8. **Large Ecosystem**: Docker has a large and active ecosystem, with a wide range of tools and plugins available for various tasks.
9. **Version Control**: Docker provides an easy way to manage different versions of an application, with features like version tagging and rollbacks.
10. **Cost-Effective**: Docker can help reduce costs by minimizing the need for virtual machines and reducing the overhead of resource allocation.
**Cons:**
1. **Steep Learning Curve**: Docker has a complex architecture and requires a good understanding of containerization, networking, and storage.
2. **Overhead**: While Docker containers are lightweight, they still require some overhead in terms of resources and management.
3. **Security Risks**: If not properly configured, Docker containers can introduce security risks, such as unauthorized access to the host system.
4. **Limited Support for GUI Applications**: Docker is primarily designed for running command-line applications, and running GUI applications can be challenging.
5. **Networking Complexity**: Docker's networking model can be complex, especially in large-scale deployments.
6. **Storage Complexity**: Docker's storage model can be complex, especially when dealing with persistent data.
7. **Limited Support for Legacy Applications**: Docker is designed for modern, cloud-native applications, and may not be suitable for legacy applications that rely on older technologies.
8. **Dependence on Docker Hub**: Docker relies heavily on Docker Hub, which can be a single point of failure if the hub is down or experiencing issues.
9. **Orchestration Complexity**: Docker provides a built-in orchestration tool, Docker Swarm, but it can be complex to set up and manage.
10. **Vendor Lock-in**: Docker is a proprietary technology, and while it has a large ecosystem, it can be difficult to switch to alternative containerization platforms.
Overall, Docker provides a powerful and flexible way to deploy and manage applications, but it requires careful planning, configuration, and management to ensure optimal performance and security.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-70B-Instruct-DPO-v0.4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |78.89|
|AI2 Reasoning Challenge (25-Shot)|72.61|
|HellaSwag (10-Shot) |86.03|
|MMLU (5-Shot) |80.50|
|TruthfulQA (0-shot) |63.26|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |87.34|
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama", "llama-3", "chatml"], "datasets": ["argilla/ultrafeedback-binarized-preferences"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "Llama-3-70B-Instruct-DPO-v0.4", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.03, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.5, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.26}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 87.34, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}]}]} | blockblockblock/Llama-3-70B-Instruct-DPO-v0.4-bpw3-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"finetune",
"dpo",
"facebook",
"meta",
"pytorch",
"llama-3",
"chatml",
"conversational",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-05-03T18:28:38+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #3-bit #region-us
| ![Llama-3 DPO Logo](./URL)
MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
===========================================
This model is a DPO fine-tune of the 'meta-llama/Meta-Llama-3-70B-Instruct' model.
Quantized GGUF
==============
All GGUF models are available here: MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF
Prompt Template
===============
This model uses 'ChatML' prompt template:
'
How to use
==========
You can use this model by passing 'MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4' as the model name to Hugging Face's
transformers library.
Sample Response
---------------
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #3-bit #region-us \n"
] | [
116
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #3-bit #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/78a0jm6 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:28:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
**Note**: This model card has been generated automatically according to the information the Trainer had access to.
Visit the [model card](https://ritvik19.github.io/zephyr-mini/) to see the full description.
# zephyr-1b-olmo-sft-qlora
This model is a fine-tuned version of [allenai/OLMo-1B-hf](https://huggingface.co/allenai/OLMo-1B-hf) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3126
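A minimal inference sketch (not part of the generated card): it assumes the repository hosts a QLoRA adapter that `peft` can load directly, and that a chat template is configured on the tokenizer; otherwise, format the prompt manually.
```python
# Hedged sketch; adapter auto-loading and the chat template are assumptions.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "Ritvik19/zephyr-1b-olmo-sft-qlora"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain LoRA in one sentence."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```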
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3184 | 1.0 | 1001 | 1.3126 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "allenai/OLMo-1B-hf", "model-index": [{"name": "zephyr-1b-olmo-sft-qlora", "results": []}]} | Ritvik19/zephyr-1b-olmo-sft-qlora | null | [
"peft",
"safetensors",
"olmo",
"alignment-handbook",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:allenai/OLMo-1B-hf",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T18:29:27+00:00 | [] | [] | TAGS
#peft #safetensors #olmo #alignment-handbook #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-allenai/OLMo-1B-hf #license-apache-2.0 #region-us
| Note: This model card has been generated automatically according to the information the Trainer had access to.
Visit the model card to see the full description.
zephyr-1b-olmo-sft-qlora
========================
This model is a fine-tuned version of allenai/OLMo-1B-hf on the HuggingFaceH4/ultrachat\_200k dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3126
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 128
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.40.1
* Pytorch 2.1.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #olmo #alignment-handbook #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-allenai/OLMo-1B-hf #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
63,
154,
5,
52
] | [
"TAGS\n#peft #safetensors #olmo #alignment-handbook #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-allenai/OLMo-1B-hf #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
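For intuition, SLERP interpolates along the arc between two weight vectors instead of the straight line used by plain averaging. A minimal sketch over flattened tensors (an illustration, not mergekit's actual implementation):

```python
# Toy SLERP between two weight tensors; mergekit's real code also applies the
# per-layer t schedules from the configuration below and handles more edge cases.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel weights: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    mixed = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)
```

In the configuration below, the `t` values set, per layer and per filter (`self_attn` vs. `mlp`), how far the merge leans toward the second model.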
### Models Merged
The following models were included in the merge:
* /content/drive/MyDrive/dawg
* [Undi95/UtopiaXL-13B](https://huggingface.co/Undi95/UtopiaXL-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
slices:
- sources:
- model: /content/drive/MyDrive/dawg
layer_range: [0, 32]
- model: Undi95/UtopiaXL-13B
layer_range: [0, 32]
merge_method: slerp
base_model: /content/drive/MyDrive/dawg
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Undi95/UtopiaXL-13B"]} | akrads/merge | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Undi95/UtopiaXL-13B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:30:56+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-Undi95/UtopiaXL-13B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* /content/drive/MyDrive/dawg
* Undi95/UtopiaXL-13B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* /content/drive/MyDrive/dawg\n* Undi95/UtopiaXL-13B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-Undi95/UtopiaXL-13B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* /content/drive/MyDrive/dawg\n* Undi95/UtopiaXL-13B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
55,
17,
4,
17,
37,
16
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-Undi95/UtopiaXL-13B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# merge\n\nThis is a merge of pre-trained language models created using mergekit.## Merge Details### Merge Method\n\nThis model was merged using the SLERP merge method.### Models Merged\n\nThe following models were included in the merge:\n* /content/drive/MyDrive/dawg\n* Undi95/UtopiaXL-13B### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-classification | setfit |
# SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a usage sketch is shown below).
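Putting the two steps together at inference time, a minimal sketch with the `setfit` library; the repository id below is a placeholder, since this card does not state it:
```python
# Hedged usage sketch; replace the placeholder id with this repository's actual id.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("your-username/your-setfit-model")
preds = model.predict(["interstellar dust grains carry silicate and carbon signatures"])
print(preds)
```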
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 43 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 5 | <ul><li>'its civilizations before the species is able to develop the technology to communicate with other intelligent species intelligent alien species have not developed advanced technologies it may be that while alien species with intelligence exist they are primitive or have not reached the level of technological advancement necessary to communicate along with nonintelligent life such civilizations would also be very difficult to detect a trip using conventional rockets would take hundreds of thousands of years to reach the nearest starsto skeptics the fact that in the history of life on the earth only one species has developed a civilization to the point of being capable of spaceflight and radio technology lends more credence to the idea that technologically advanced civilizations are rare in the universeanother hypothesis in this category is the water world hypothesis according to author and scientist david brin it turns out that our earth skates the very inner edge of our suns continuously habitable — or goldilocks — zone and earth may be anomalous it may be that because we are so close to our sun we have an anomalously oxygenrich atmosphere and we have anomalously little ocean for a water world in other words 32 percent continental mass may be high among water worlds brin continues in which case the evolution of creatures like us with hands and fire and all that sort of thing may be rare in the galaxy in which case when we do build starships and head out there perhaps well find lots and lots of life worlds but theyre all like polynesia well find lots and lots of intelligent lifeforms out there but theyre all dolphins whales squids who could never build their own starships what a perfect universe for us to be in because nobody would be able to boss us around and wed get to be the voyagers the star trek people the starship builders the policemen and so on it is the nature of intelligent life to destroy itself this is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology the astrophysicist sebastian von hoerner stated that the progress of science and technology on earth was driven by two factors — the struggle for domination and the desire for an easy life the former potentially leads to complete destruction while the latter may lead to biological or mental degeneration possible means of annihilation via major global issues where global interconnectedness actually makes humanity more vulnerable than resilient are many including war accidental environmental contamination or damage the development of biotechnology synthetic life like mirror life resource depletion climate change or poorlydesigned artificial intelligence this general theme is explored both in fiction and in'</li><li>'##s in the range 50 to 500 micrometers of average density 20 gcm3 with porosity about 40 the total influx rate of meteoritic sites of most idps captured in the earths stratosphere range between 1 and 3 gcm3 with an average density at about 20 gcm3other specific dust properties in circumstellar dust astronomers have found molecular signatures of co silicon carbide amorphous silicate polycyclic aromatic hydrocarbons water ice and polyformaldehyde among others in the diffuse interstellar medium there is evidence for silicate and carbon grains cometary dust is generally different with overlap from asteroidal dust asteroidal dust resembles carbonaceous chondritic meteorites cometary dust resembles interstellar grains which can include silicates polycyclic aromatic hydrocarbons and water ice in september 2020 evidence was presented of solidstate water in the interstellar medium and particularly of water ice mixed with silicate grains in cosmic dust grains the large grains in interstellar space are probably complex with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds that cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass those cores mostly start with silicate particles condensing in the atmospheres of cool oxygenrich redgiants and carbon grains condensing in the atmospheres of cool carbon stars red giants have evolved or altered off the main sequence and have entered the giant phase of their evolution and are the major source of refractory dust grain cores in galaxies those refractory cores are also called stardust section above which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars several percent of refractory grain cores have condensed within expanding interiors of supernovae a type of cosmic decompression chamber meteoriticists who study refractory stardust extracted from meteorites often call it presolar grains but that within meteorites is only a small fraction of all presolar dust stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust which accretes cold onto preexisting dust in dark molecular clouds of the galaxy those molecular clouds are very cold typically less than 50k so that ices of many kinds may accrete onto grains in cases only to be destroyed or split apart by'</li><li>'##sequilibrium in the geochemical cycle which would point to a reaction happening more or less often than it should a disequilibrium such as this could be interpreted as an indication of life a biosignature must be able to last for long enough so that a probe telescope or human can be able to detect it a consequence of a biological organisms use of metabolic reactions for energy is the production of metabolic waste in addition the structure of an organism can be preserved as a fossil and we know that some fossils on earth are as old as 35 billion years these byproducts can make excellent biosignatures since they provide direct evidence for life however in order to be a viable biosignature a byproduct must subsequently remain intact so that scientists may discover it a biosignature must be detectable with the current technology to be relevant in scientific investigation this seems to be an obvious statement however there are many scenarios in which life may be present on a planet yet remain undetectable because of humancaused limitations false positives every possible biosignature is associated with its own set of unique false positive mechanisms or nonbiological processes that can mimic the detectable feature of a biosignature an important example is using oxygen as a biosignature on earth the majority of life is centred around oxygen it is a byproduct of photosynthesis and is subsequently used by other life forms to breathe oxygen is also readily detectable in spectra with multiple bands across a relatively wide wavelength range therefore it makes a very good biosignature however finding oxygen alone in a planets atmosphere is not enough to confirm a biosignature because of the falsepositive mechanisms associated with it one possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of noncondensable gasses or if it loses a lot of water finding and distinguishing a biosignature from its potential falsepositive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abioticbiological degeneracy if nature allows false negatives opposite to false positives false negative biosignatures arise in a scenario where life may be present on another planet but some processes on that planet make potential biosignatures undetectable this is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres human limitations there are many ways in which humans may limit the viability'</li></ul> |
| 17 | <ul><li>'ice began in 1950 with several expeditions using this drilling approach that year the epf drilled holes of 126 m and 151 m at camp vi and station centrale respectively with a rotary rig with no drilling fluid cores were retrieved from both holes a hole 30 m deep was drilled by a oneton plunger which produced a hole 08 m in diameter which allowed a man to be lowered into the hole to study the stratigraphy ractmadoux and reynauds thermal drilling on the mer de glace in 1949 was interrupted by crevasses moraines or air pockets so when the expedition returned to the glacier in 1950 they switched to mechanical drilling with a motordriven rotary drill using an auger as the drillbit and completed a 114 m hole before reaching the bed of the glacier at four separate locations the deepest of which was 284 m — a record depth at that time the augers were similar in form to blumcke and hesss auger from the early part of the century and ractmadoux and reynaud made several modifications to the design over the course of their expedition attempts to switch to different drillbits to penetrate moraine material they encountered were unsuccessful and a new hole was begun instead in these cases as with blumcke and hess an air gap that did not allow the water'</li><li>'a slightly greener tint than liquid water since absorption is cumulative the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the iceother colors can appear in the presence of light absorbing impurities where the impurity is dictating the color rather than the ice itself for instance icebergs containing impurities eg sediments algae air bubbles can appear brown grey or greenbecause ice in natural environments is usually close to its melting temperature its hardness shows pronounced temperature variations at its melting point ice has a mohs hardness of 2 or less but the hardness increases to about 4 at a temperature of −44 °c −47 °f and to 6 at a temperature of −785 °c −1093 °f the vaporization point of solid carbon dioxide dry ice ice may be any one of the as of 2021 nineteen known solid crystalline phases of water or in an amorphous solid state at various densitiesmost liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together however the strong hydrogen bonds in water make it different for some pressures higher than 1 atm 010 mpa water freezes at a temperature below 0 °c as shown in the phase diagram below the melting of ice under high pressures is thought to contribute to the movement of glaciersice water and water vapour can coexist at the triple point which is exactly 27316 k 001 °c at a pressure of 611657 pa the kelvin was defined as 127316 of the difference between this triple point and absolute zero though this definition changed in may 2019 unlike most other solids ice is difficult to superheat in an experiment ice at −3 °c was superheated to about 17 °c for about 250 picosecondssubjected to higher pressures and varying temperatures ice can form in nineteen separate known crystalline phases with care at least fifteen of these phases one of the known exceptions being ice x can be recovered at ambient pressure and low temperature in metastable form the types are differentiated by their crystalline structure proton ordering and density there are also two metastable phases of ice under pressure both fully hydrogendisordered these are iv and xii ice xii was discovered in 1996 in 2006 xiii and xiv were discovered ices xi xiii and xiv are hydrogenordered forms of ices ih v and xii respectively in 2009 ice xv was found at extremely high pressures and −143 °c at even higher pressures ice is predicted to become a metal this has been variously estimated to occur at 155 tpa or 562 tpaas well as'</li><li>'borehole has petrophysical measurements made of the wall rocks and these measurements are repeated along the length of the core then the two data sets correlated one will almost universally find that the depth of record for a particular piece of core differs between the two methods of measurement which set of measurements to believe then becomes a matter of policy for the client in an industrial setting or of great controversy in a context without an overriding authority recording that there are discrepancies for whatever reason retains the possibility of correcting an incorrect decision at a later date destroying the incorrect depth data makes it impossible to correct a mistake later any system for retaining and archiving data and core samples needs to be designed so that dissenting opinion like this can be retained if core samples from a campaign are competent it is common practice to slab them – cut the sample into two or more samples longitudinally – quite early in laboratory processing so that one set of samples can be archived early in the analysis sequence as a protection against errors in processing slabbing the core into a 23 and a 13 set is common it is also common for one set to be retained by the main customer while the second set goes to the government who often impose a condition for such donation as a condition of exploration exploitation licensing slabbing also has the benefit of preparing a flat smooth surface for examination and testing of profile permeability which is very much easier to work with than the typically rough curved surface of core samples when theyre fresh from the coring equipment photography of raw and slabbed core surfaces is routine often under both natural and ultraviolet light a unit of length occasionally used in the literature on seabed cores is cmbsf an abbreviation for centimeters below sea floor the technique of coring long predates attempts to drill into the earth ’ s mantle by the deep sea drilling program the value to oceanic and other geologic history of obtaining cores over a wide area of sea floors soon became apparent core sampling by many scientific and exploratory organizations expanded rapidly to date hundreds of thousands of core samples have been collected from floors of all the planets oceans and many of its inland waters access to many of these samples is facilitated by the index to marine lacustrine geological samples coring began as a method of sampling surroundings of ore deposits and oil exploration it soon expanded to oceans lakes ice mud soil and wood cores on very old trees give information about their growth rings without destroying the tree cores indicate variations of climate species and sedimentary composition during geologic history the dynamic phenomena of the earths surface are for the most part cyclical in a number of ways especially temperature'</li></ul> |
| 0 | <ul><li>'##m and henry developed the analogy between electricity and acoustics the twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place the first such application was sabines groundbreaking work in architectural acoustics and many others followed underwater acoustics was used for detecting submarines in the first world war sound recording and the telephone played important roles in a global transformation of society sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing the ultrasonic frequency range enabled wholly new kinds of application in medicine and industry new kinds of transducers generators and receivers of acoustic energy were invented and put to use acoustics is defined by ansiasa s112013 as a science of sound including its production transmission and effects including biological and psychological effects b those qualities of a room that together determine its character with respect to auditory effects the study of acoustics revolves around the generation propagation and reception of mechanical waves and vibrations the steps shown in the above diagram can be found in any acoustical event or process there are many kinds of cause both natural and volitional there are many kinds of transduction process that convert energy from some other form into sonic energy producing a sound wave there is one fundamental equation that describes sound wave propagation the acoustic wave equation but the phenomena that emerge from it are varied and often complex the wave carries energy throughout the propagating medium eventually this energy is transduced again into other forms in ways that again may be natural andor volitionally contrived the final effect may be purely physical or it may reach far into the biological or volitional domains the five basic steps are found equally well whether we are talking about an earthquake a submarine using sonar to locate its foe or a band playing in a rock concert the central stage in the acoustical process is wave propagation this falls within the domain of physical acoustics in fluids sound propagates primarily as a pressure wave in solids mechanical waves can take many forms including longitudinal waves transverse waves and surface waves acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment this interaction can be described as either a diffraction interference or a reflection or a mix of the three if several media are present a refraction can also occur transduction processes are also of special importance to acoustics in fluids such as air and water sound waves propagate as disturbances in the ambient pressure level while this disturbance is usually small it is still noticeable to the human ear the smallest sound that a person can hear'</li><li>'##mhzcdot textcmrightcdot ell textcmcdot textftextmhz attenuation is linearly dependent on the medium length and attenuation coefficient as well as – approximately – the frequency of the incident ultrasound beam for biological tissue while for simpler media such as air the relationship is quadratic attenuation coefficients vary widely for different media in biomedical ultrasound imaging however biological materials and water are the most commonly used media the attenuation coefficients of common biological materials at a frequency of 1 mhz are listed below there are two general ways of acoustic energy losses absorption and scattering ultrasound propagation through homogeneous media is associated only with absorption and can be characterized with absorption coefficient only propagation through heterogeneous media requires taking into account scattering shortwave radiation emitted from the sun have wavelengths in the visible spectrum of light that range from 360 nm violet to 750 nm red when the suns radiation reaches the sea surface the shortwave radiation is attenuated by the water and the intensity of light decreases exponentially with water depth the intensity of light at depth can be calculated using the beerlambert law in clear midocean waters visible light is absorbed most strongly at the longest wavelengths thus red orange and yellow wavelengths are totally absorbed at shallower depths while blue and violet wavelengths reach deeper in the water column because the blue and violet wavelengths are absorbed least compared to the other wavelengths openocean waters appear deep blue to the eye near the shore coastal water contains more phytoplankton than the very clear midocean waters chlorophylla pigments in the phytoplankton absorb light and the plants themselves scatter light making coastal waters less clear than midocean waters chlorophylla absorbs light most strongly in the shortest wavelengths blue and violet of the visible spectrum in coastal waters where high concentrations of phytoplankton occur the green wavelength reaches the deepest in the water column and the color of water appears bluegreen or green the energy with which an earthquake affects a location depends on the running distance the attenuation in the signal of ground motion intensity plays an important role in the assessment of possible strong groundshaking a seismic wave loses energy as it propagates through the earth seismic attenuation this phenomenon is tied into the dispersion of the seismic energy with the distance there are two types of dissipated energy geometric dispersion caused by distribution of the seismic energy to greater volumes dispersion as heat also called intrinsic attenuation or anelastic attenuationin porous fluid — saturated sedimentary'</li><li>'in acoustics acoustic attenuation is a measure of the energy loss of sound propagation through an acoustic transmission medium most media have viscosity and are therefore not ideal media when sound propagates in such media there is always thermal consumption of energy caused by viscosity this effect can be quantified through the stokess law of sound attenuation sound attenuation may also be a result of heat conductivity in the media as has been shown by g kirchhoff in 1868 the stokeskirchhoff attenuation formula takes into account both viscosity and thermal conductivity effects for heterogeneous media besides media viscosity acoustic scattering is another main reason for removal of acoustic energy acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields such as medical ultrasonography vibration and noise reduction many experimental and field measurements show that the acoustic attenuation coefficient of a wide range of viscoelastic materials such as soft tissue polymers soil and porous rock can be expressed as the following power law with respect to frequency p x δ x p x e − α ω δ x α ω α 0 ω η displaystyle pxdelta xpxealpha omega delta xalpha omega alpha 0omega eta where ω displaystyle omega is the angular frequency p the pressure δ x displaystyle delta x the wave propagation distance α ω displaystyle alpha omega the attenuation coefficient and α 0 displaystyle alpha 0 and the frequencydependent exponent η displaystyle eta are real nonnegative material parameters obtained by fitting experimental data the value of η displaystyle eta ranges from 0 to 4 acoustic attenuation in water is frequencysquared dependent namely η 2 displaystyle eta 2 acoustic attenuation in many metals and crystalline materials is frequencyindependent namely η 1 displaystyle eta 1 in contrast it is widely noted that the η displaystyle eta of viscoelastic materials is between 0 and 2 for example the exponent η displaystyle eta of sediment soil and rock is about 1 and the exponent η displaystyle eta of most soft tissues is between 1 and 2the classical dissipative acoustic wave propagation equations are confined to the frequencyindependent and frequencysquared dependent attenuation such as the damped wave equation and the approximate thermoviscous wave equation in recent decades increasing attention and efforts have been focused on developing accurate models to describe general power law frequencydependent acoustic attenuation most of these recent frequencydependent models are established via'</li></ul> |
| 15 | <ul><li>'native species including the allen cays rock iguana and audubons shearwater since 2008 island conservation and the us fish and wildlife service usfws have worked together to remove invasive vertebrates from desecheo national wildlife refuge in puerto rico primarily benefiting the higo chumbo cactus three endemic reptiles two endemic invertebrates and to recover globally significant seabird colonies of brown boobies red footed boobies and bridled terns future work will focus on important seabird populations key reptile groups including west indian rock iguanas and the restoration of mona island alto velo and offshore cays in the puerto rican bank and the bahamas key partnerships include the usfws puerto rico dner the bahamas national trust and the dominican republic ministry of environment and natural resources in this region island conservation works primarily in ecuador and chile in ecuador the rabida island restoration project was completed in 2010 a gecko phyllodactylus sp found during monitoring in late 2012 was only recorded from subfossils estimated at more than 5700 years old live rabida island endemic land snails bulimulus naesiotus rabidensis not seen since collected over 100 years ago were also collected in late 2012 this was followed in 2012 by the pinzon and plaza sur island restoration project primarily benefiting the pinzon giant tortoise opuntia galapageia galapagos land iguana as a result of the project pinzon giant tortoise hatched from eggs and were surviving in the wild for the first time in more than 150 years in 2019 the directorate of galapagos national park with island conservation used drones to eradicate invasive rats from north seymour island this was the first time such an approach has been used on vertebrates in the wild the expectation is that this innovation will pave the way for cheaper invasive species eradications in the future on small and midsized islands the current focus in ecuador is floreana island with 55 iucn threatened species present and 13 extirpated species that could be reintroduced after invasive mammals are eradicated partners include the leona m and harry b helmsley charitable trust ministry of environment galapagos national park directorate galapagos biosecurity agency the ministry of agriculture the floreana parish council and the galapagos government council in 2009 chile island conservation initiated formal collaborations with conaf the countrys protected areas agency to further restoration of islands under their administration in january 2014 the choros island restoration project was completed benefiting the humboldt penguin peruvian diving petrel and the local ecotourism'</li><li>'ligase or chloroform extraction of dna may be necessary for electroporation alternatively only use a tenth of the ligation mixture to reduce the amount of contaminants normal preparation of competent cells can yield transformation efficiency ranging from 106 to 108 cfuμg dna protocols for chemical method however exist for making super competent cells that may yield a transformation efficiency of over 1 x 109damage to dna – exposure of dna to uv radiation in standard preparative agarose gel electrophoresis procedure for as little as 45 seconds can damage the dna and this can significantly reduce the transformation efficiency adding cytidine or guanosine to the electrophoresis buffer at 1 mm concentration however may protect the dna from damage a higherwavelength uv radiation 365 nm which cause less damage to dna should be used if it is necessary work for work on the dna on a uv transilluminator for an extended period of time this longer wavelength uv produces weaker fluorescence with the ethidium bromide intercalated into the dna therefore if it is necessary to capture images of the dna bands a shorter wavelength 302 or 312 nm uv radiations may be used such exposure however should be limited to a very short time if the dna is to be recovered later for ligation and transformation the method used for introducing the dna have a significant impact on the transformation efficiency electroporation tends to be more efficient than chemical methods and can be applied to a wide range of species and to strains that were previously resistant and recalcitrant to transformation techniqueselectroporation has been found to have an average yield typically between 104 108 cfuug however a transformation efficiencies as high as 055 x 1010 colony forming units cfu per microgram of dna for e coli for samples that are hard to handle like cdna libraries gdna and plasmids larger than 30 kb it is suggested to use electrocompetent cells that have transformation efficiencies of over 1 x 1010 cfuµg this will ensure a high success rate in introducing the dna and forming a large number of colonies it is important to adjust and optimize the electroporation buffer increasing the concentration of the electroporation buffer can result in increased transformation efficiencies and the shape strength number and number of pulses these electrical parameters play a key role in transformation efficiency chemical transformation or heat shock can be performed in a simple laboratory setup typically yielding transformation efficiencies that are adequate for cloning and subcloning applications approximately 106 cfuµ'</li><li>'at least one gene that affects isolation such that substituting one chromosome from a line of low isolation with another of high isolation reduces the hybridization frequency in addition interactions between chromosomes are detected so that certain combinations of the chromosomes have a multiplying effect cross incompatibility or incongruence in plants is also determined by major genes that are not associated at the selfincompatibility s locus reproductive isolation between species appears in certain cases a long time after fertilization and the formation of the zygote as happens – for example – in the twin species drosophila pavani and d gaucha the hybrids between both species are not sterile in the sense that they produce viable gametes ovules and spermatozoa however they cannot produce offspring as the sperm of the hybrid male do not survive in the semen receptors of the females be they hybrids or from the parent lines in the same way the sperm of the males of the two parent species do not survive in the reproductive tract of the hybrid female this type of postcopulatory isolation appears as the most efficient system for maintaining reproductive isolation in many speciesthe development of a zygote into an adult is a complex and delicate process of interactions between genes and the environment that must be carried out precisely and if there is any alteration in the usual process caused by the absence of a necessary gene or the presence of a different one it can arrest the normal development causing the nonviability of the hybrid or its sterility it should be borne in mind that half of the chromosomes and genes of a hybrid are from one species and the other half come from the other if the two species are genetically different there is little possibility that the genes from both will act harmoniously in the hybrid from this perspective only a few genes would be required in order to bring about post copulatory isolation as opposed to the situation described previously for precopulatory isolationin many species where precopulatory reproductive isolation does not exist hybrids are produced but they are of only one sex this is the case for the hybridization between females of drosophila simulans and drosophila melanogaster males the hybridized females die early in their development so that only males are seen among the offspring however populations of d simulans have been recorded with genes that permit the development of adult hybrid females that is the viability of the females is rescued it is assumed that the normal activity of these speciation genes is to inhibit the expression of the genes that allow the growth of the hybrid there'</li></ul> |
| 29 | <ul><li>'##gat rises and pressure differences force the saline water from the north sea through the narrow danish straits into the baltic sea throughout the entire inflow process the baltic seas water level rises on average by about 59 cm with 38 cm occurring during the preparatory period and 21 cm during the actual saline inflow the mbi itself typically lasts for 7 – 8 days the formation of an mbi requires specific relatively rare weather conditions between 1897 and 1976 approximately 90 mbis were observed averaging about one per year occasionally there are even multiyear periods without any mbis occurring large inflows that effectively renew the deep basin waters occur on average only once every ten yearsvery large mbis have occurred in 1897 330 km3 1906 300 km3 1922 510 km3 1951 510 km3 199394 300 km3 and 20142015 300 km3 large mbis have on the other hand been observed in 1898 twice 1900 1902 twice 1914 1921 1925 1926 1960 1965 1969 1973 1976 and 2003 the mbi that started in 2014 was by far the third largest mbi in the baltic sea only the inflows of 1951 and 19211922 were larger than itpreviously it was believed that there had been a genuine decline in the number of mbis after 1980 but recent studies have changed our understanding of the occurrence of saline inflows especially after the lightship gedser rev discontinued regular salinity measurements in the belt sea in 1976 the picture of the inflows based on salinity measurements remained incomplete at the leibniz institute for baltic sea research warnemunde germany an updated time series has been compiled filling in the gaps in observations and covering major baltic inflows and various smaller inflow events of saline water from around 1890 to the present day the updated time series is based on direct discharge data from the darss sill and no longer shows a clear change in the frequency or intensity of saline inflows instead there is cyclical variation in the intensity of mbis at approximately 30year intervals major baltic inflows mbis are the only natural phenomenon capable of oxygenating the deep saline waters of the baltic sea making their occurrence crucial for the ecological state of the sea the salinity and oxygen from mbis significantly impact the baltic seas ecosystems including the reproductive conditions of marine fish species such as cod the distribution of freshwater and marine species and the overall biodiversity of the baltic seathe heavy saline water brought in by mbis slowly advances along the seabed of the baltic proper at a pace of a few kilometers per day displacing the deep water from one basin to another'</li><li>'is measured in watts and is given by the solar constant times the crosssectional area of the earth corresponded to the radiation because the surface area of a sphere is four times the crosssectional area of a sphere ie the area of a circle the globally and yearly averaged toa flux is one quarter of the solar constant and so is approximately 340 watts per square meter wm2 since the absorption varies with location as well as with diurnal seasonal and annual variations the numbers quoted are multiyear averages obtained from multiple satellite measurementsof the 340 wm2 of solar radiation received by the earth an average of 77 wm2 is reflected back to space by clouds and the atmosphere and 23 wm2 is reflected by the surface albedo leaving 240 wm2 of solar energy input to the earths energy budget this amount is called the absorbed solar radiation asr it implies a value of about 03 for the mean net albedo of earth also called its bond albedo a a s r 1 − a × 340 w m − 2 [UNK] 240 w m − 2 displaystyle asr1atimes 340mathrm w mathrm m 2simeq 240mathrm w mathrm m 2 thermal energy leaves the planet in the form of outgoing longwave radiation olr longwave radiation is electromagnetic thermal radiation emitted by earths surface and atmosphere longwave radiation is in the infrared band but the terms are not synonymous as infrared radiation can be either shortwave or longwave sunlight contains significant amounts of shortwave infrared radiation a threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation generally absorbed solar energy is converted to different forms of heat energy some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the atmospheric window this radiation is able to pass through the atmosphere unimpeded and directly escape to space contributing to olr the remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms until the atmosphere emits that energy as thermal energy which is able to escape to space again contributing to olr for example heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conductionconvection processes as well as via radiative heat transport ultimately all outgoing energy is radiated into space in the form of longwave radiation the transport of longwave radiation from earths surface through its multilayered atmosphere is governed by radiative transfer equations such as schwarzschilds equation for radiative transfer or more complex equations if scattering is present and'</li><li>'ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate thus the oceans concentration of carbonate ions is reduced removing an essential building block for marine organisms to build shells or calcify ca2 co2−3 ⇌ caco3the increase in concentrations of dissolved carbon dioxide and bicarbonate and reduction in carbonate are shown in the bjerrum plot the saturation state known as ω of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve and for calcium carbonate is described by the following equation ω ca 2 co 3 2 − k s p displaystyle omega frac leftce ca2rightleftce co32rightksp here ω is the product of the concentrations or activities of the reacting ions that form the mineral ca2 and co32− divided by the apparent solubility product at equilibrium ksp that is when the rates of precipitation and dissolution are equal in seawater dissolution boundary is formed as a result of temperature pressure and depth and is known as the saturation horizon above this saturation horizon ω has a value greater than 1 and caco3 does not readily dissolve most calcifying organisms live in such waters below this depth ω has a value less than 1 and caco3 will dissolve the carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to sea floor therefore sediment below this depth will be void of calcium carbonate increasing co2 levels and the resulting lower ph of seawater decreases the concentration of co32− and the saturation state of caco3 therefore increasing caco3 dissolution calcium carbonate most commonly occurs in two common polymorphs crystalline forms aragonite and calcite aragonite is much more soluble than calcite so the aragonite saturation horizon and aragonite compensation depth is always nearer to the surface than the calcite saturation horizon this also means that those organisms that produce aragonite may be more vulnerable to changes in ocean acidity than those that produce calcite ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface this decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms because the inorganic precipitation of caco3 is directly proportional to its saturation state and calcifying organisms exhibit stress in waters with lower saturation states already now large quantities of water undersaturated in aragonite are upwelling close to the pacific continental shelf area of north america from vancouver to northern'</li></ul> |
| 28 | <ul><li>'– 20 pdf acta univ apulensis pp 21 – 38 pdf acta univ apulensis matveev andrey o 2017 farey sequences duality and maps between subsequences berlin de de gruyter isbn 9783110546620 errata code'</li><li>'a000330 1 2 2 2 [UNK] n 2 1 3 b 0 n 3 3 b 1 n 2 3 b 2 n 1 1 3 n 3 3 2 n 2 1 2 n displaystyle 1222cdots n2frac 13b0n33b1n23b2n1tfrac 13leftn3tfrac 32n2tfrac 12nright some authors use the alternate convention for bernoulli numbers and state bernoullis formula in this way s m n 1 m 1 [UNK] k 0 m − 1 k m 1 k b k − n m 1 − k displaystyle smnfrac 1m1sum k0m1kbinom m1kbknm1k bernoullis formula is sometimes called faulhabers formula after johann faulhaber who also found remarkable ways to calculate sums of powers faulhabers formula was generalized by v guo and j zeng to a qanalog the bernoulli numbers appear in the taylor series expansion of many trigonometric functions and hyperbolic functions the bernoulli numbers appear in the following laurent seriesdigamma function ψ z ln z − [UNK] k 1 ∞ b k k z k displaystyle psi zln zsum k1infty frac bkkzk the kervaire – milnor formula for the order of the cyclic group of diffeomorphism classes of exotic 4n − 1spheres which bound parallelizable manifolds involves bernoulli numbers let esn be the number of such exotic spheres for n ≥ 2 then es n 2 2 n − 2 − 2 4 n − 3 numerator b 4 n 4 n displaystyle textit esn22n224n3operatorname numerator leftfrac b4n4nright the hirzebruch signature theorem for the l genus of a smooth oriented closed manifold of dimension 4n also involves bernoulli numbers the connection of the bernoulli number to various kinds of combinatorial numbers is based on the classical theory of finite differences and on the combinatorial interpretation of the bernoulli numbers as an instance of a fundamental combinatorial principle the inclusion – exclusion principle the definition to proceed with was developed by julius worpitzky in 1883 besides elementary arithmetic only the factorial function n and the power function km is employed the signless worpitzky numbers are defined as w n k [UNK] v 0 k − 1 v k v 1 n k v k − v displays'</li><li>'enough to know they exist and have certain properties using the pigeonhole principle thue and later siegel managed to prove the existence of auxiliary functions which for example took the value zero at many different points or took high order zeros at a smaller collection of points moreover they proved it was possible to construct such functions without making the functions too large their auxiliary functions were not explicit functions then but by knowing that a certain function with certain properties existed they used its properties to simplify the transcendence proofs of the nineteenth century and give several new resultsthis method was picked up on and used by several other mathematicians including alexander gelfond and theodor schneider who used it independently to prove the gelfond – schneider theorem alan baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately bakers theorem another example of the use of this method from the 1960s is outlined below let β equal the cube root of ba in the equation ax3 bx3 c and assume m is an integer that satisfies m 1 2n3 ≥ m ≥ 3 where n is a positive integer then there exists f x y p x y ∗ q x displaystyle fxypxyqx such that [UNK] i 0 m n u i x i p x displaystyle sum i0mnuixipx [UNK] i 0 m n v i x i q x displaystyle sum i0mnvixiqx the auxiliary polynomial theorem states max 0 ≤ i ≤ m n u i v i ≤ 2 b 9 m n displaystyle max 0leq ileq mnuivileq 2b9mn in the 1960s serge lang proved a result using this nonexplicit form of auxiliary functions the theorem implies both the hermite – lindemann and gelfond – schneider theorems the theorem deals with a number field k and meromorphic functions f1fn of order at most ρ at least two of which are algebraically independent and such that if we differentiate any of these functions then the result is a polynomial in all of the functions under these hypotheses the theorem states that if there are m distinct complex numbers ω1ωm such that fi ωj is in k for all combinations of i and j then m is bounded by m ≤ 20 ρ k q displaystyle mleq 20rho kmathbb q to prove the result lang took two algebraically independent functions from f1fn say f and g and then created an auxiliary function which was simply a polynomial f in f and g this auxiliary function could'</li></ul> |
| 16 | <ul><li>'physiographic regions are a means of defining earths landforms into distinct mutually exclusive areas independent of political boundaries it is based upon the classic threetiered approach by nevin m fenneman in 1916 that separates landforms into physiographic divisions physiographic provinces and physiographic sectionsthe classification mechanism has become a popular geographical tool in the united states indicated by the publication of a usgs shapefile that maps the regions of the original work and the national park servicess use of the terminology to describe the regions in which its parks are locatedoriginally used in north america the model became the basis for similar classifications of other continents during the early 1900s the study of regionalscale geomorphology was termed physiography physiography later was considered to be a portmanteau of physical and geography and therefore synonymous with physical geography and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with pure morphology separated from its geological heritage in the period following world war ii the emergence of process climatic and quantitative studies led to a preference by many earth scientists for the term geomorphology in order to suggest an analytical approach to landscapes rather than a descriptive one in current usage physiography still lends itself to confusion as to which meaning is meant the more specialized geomorphological definition or the more encompassing physical geography definition for the purposes of physiographic mapping landforms are classified according to both their geologic structures and histories distinctions based on geologic age also correspond to physiographic distinctions where the forms are so recent as to be in their first erosion cycle as is generally the case with sheets of glacial drift generally forms which result from similar histories are characterized by certain similar features and differences in history result in corresponding differences of form usually resulting in distinctive features which are obvious to the casual observer but this is not always the case a maturely dissected plateau may grade without a break from rugged mountains on the one hand to mildly rolling farm lands on the other so also forms which are not classified together may be superficially similar for example a young coastal plain and a peneplain in a large number of cases the boundary lines are also geologic lines due to differences in the nature or structure of the underlying rocks the history of physiography itself is at best a complicated effort much of'</li><li>'tightly packed array of narrow individual beams provides very high angular resolution and accuracy in general a wide swath which is depth dependent allows a boat to map more seafloor in less time than a singlebeam echosounder by making fewer passes the beams update many times per second typically 01 – 50 hz depending on water depth allowing faster boat speed while maintaining 100 coverage of the seafloor attitude sensors allow for the correction of the boats roll and pitch on the ocean surface and a gyrocompass provides accurate heading information to correct for vessel yaw most modern mbes systems use an integrated motionsensor and position system that measures yaw as well as the other dynamics and position a boatmounted global positioning system gps or other global navigation satellite system gnss positions the soundings with respect to the surface of the earth sound speed profiles speed of sound in water as a function of depth of the water column correct for refraction or raybending of the sound waves owing to nonuniform water column characteristics such as temperature conductivity and pressure a computer system processes all the data correcting for all of the above factors as well as for the angle of each individual beam the resulting sounding measurements are then processed either manually semiautomatically or automatically in limited circumstances to produce a map of the area as of 2010 a number of different outputs are generated including a subset of the original measurements that satisfy some conditions eg most representative likely soundings shallowest in a region etc or integrated digital terrain models dtm eg a regular or irregular grid of points connected into a surface historically selection of measurements was more common in hydrographic applications while dtm construction was used for engineering surveys geology flow modeling etc since c 2003 – 2005 dtms have become more accepted in hydrographic practice satellites are also used to measure bathymetry satellite radar maps deepsea topography by detecting the subtle variations in sea level caused by the gravitational pull of undersea mountains ridges and other masses on average sea level is higher over mountains and ridges than over abyssal plains and trenchesin the united states the united states army corps of engineers performs or commissions most surveys of navigable inland waterways while the national oceanic and atmospheric administration noaa performs the same role for ocean waterways coastal bathymetry data is available from noaas national geophysical data center ngdc which is now merged into national centers for environmental information bathymetric data is usually referenced to tidal vertical datums for deepwater bathymetry this is typically mean sea level msl but most data used for nautical charting is referenced to mean lower low water mllw in'</li><li>'the term stream power law describes a semiempirical family of equations used to predict the rate of erosion of a river into its bed these combine equations describing conservation of water mass and momentum in streams with relations for channel hydraulic geometry widthdischarge scaling and basin hydrology dischargearea scaling and an assumed dependency of erosion rate on either unit stream power or shear stress on the bed to produce a simplified description of erosion rate as a function of power laws of upstream drainage area a and channel slope s e k a m s n displaystyle ekamsn where e is erosion rate and k m and n are positive the value of these parameters depends on the assumptions made but all forms of the law can be expressed in this basic form the parameters k m and n are not necessarily constant but rather may vary as functions of the assumed scaling laws erosion process bedrock erodibility climate sediment flux andor erosion threshold however observations of the hydraulic scaling of real rivers believed to be in erosional steady state indicate that the ratio mn should be around 05 which provides a basic test of the applicability of each formulationalthough consisting of the product of two power laws the term stream power law refers to the derivation of the early forms of the equation from assumptions of erosion dependency on stream power rather than to the presence of power laws in the equation this relation is not a true scientific law but rather a heuristic description of erosion processes based on previously observed scaling relations which may or may not be applicable in any given natural setting the stream power law is an example of a one dimensional advection equation more specifically a hyperbolic partial differential equation typically the equation is used to simulate propagating incision pulses creating discontinuities or knickpoints in the river profile commonly used first order finite difference methods to solve the stream power law may result in significant numerical diffusion which can be prevented by the use of analytical solutions or higher order numerical schemes'</li></ul> |
| 40 | <ul><li>'##regular open set is the set u 01 ∪ 12 in r with its normal topology since 1 is in the interior of the closure of u but not in u the regular open subsets of a space form a complete boolean algebra relatively compact a subset y of a space x is relatively compact in x if the closure of y in x is compact residual if x is a space and a is a subset of x then a is residual in x if the complement of a is meagre in x also called comeagre or comeager resolvable a topological space is called resolvable if it is expressible as the union of two disjoint dense subsets rimcompact a space is rimcompact if it has a base of open sets whose boundaries are compact sspace an sspace is a hereditarily separable space which is not hereditarily lindelofscattered a space x is scattered if every nonempty subset a of x contains a point isolated in ascott the scott topology on a poset is that in which the open sets are those upper sets inaccessible by directed joinssecond category see meagresecondcountable a space is secondcountable or perfectly separable if it has a countable base for its topology every secondcountable space is firstcountable separable and lindelofsemilocally simply connected a space x is semilocally simply connected if for every point x in x there is a neighbourhood u of x such that every loop at x in u is homotopic in x to the constant loop x every simply connected space and every locally simply connected space is semilocally simply connected compare with locally simply connected here the homotopy is allowed to live in x whereas in the definition of locally simply connected the homotopy must live in usemiopen a subset a of a topological space x is called semiopen if a ⊆ cl x int x a displaystyle asubseteq operatorname cl xleftoperatorname int xaright semipreopen a subset a of a topological space x is called semipreopen if a ⊆ cl x int x cl x a displaystyle asubseteq operatorname cl xleftoperatorname int xleftoperatorname cl xarightright semiregular a space is semiregular if the regular open sets form a baseseparable a space is separable if it has a countable dense subsetseparated two sets a and'</li><li>'not necessarily equivalent the most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that cover the space in the sense that each point of the space lies in some set contained in the family this more subtle notion introduced by pavel alexandrov and pavel urysohn in 1929 exhibits compact spaces as generalizations of finite sets in spaces that are compact in this sense it is often possible to patch together information that holds locally – that is in a neighborhood of each point – into corresponding statements that hold throughout the space and many theorems are of this character the term compact set is sometimes used as a synonym for compact space but also often refers to a compact subspace of a topological space in the 19th century several disparate mathematical properties were understood that would later be seen as consequences of compactness on the one hand bernard bolzano 1817 had been aware that any bounded sequence of points in the line or plane for instance has a subsequence that must eventually get arbitrarily close to some other point called a limit point bolzanos proof relied on the method of bisection the sequence was placed into an interval that was then divided into two equal parts and a part containing infinitely many terms of the sequence was selected the process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point the full significance of bolzanos theorem and its method of proof would not emerge until almost 50 years later when it was rediscovered by karl weierstrassin the 1880s it became clear that results similar to the bolzano – weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points the idea of regarding functions as themselves points of a generalized space dates back to the investigations of giulio ascoli and cesare arzela the culmination of their investigations the arzela – ascoli theorem was a generalization of the bolzano – weierstrass theorem to families of continuous functions the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions the uniform limit of this sequence then played precisely the same role as bolzanos limit point towards the beginning of the twentieth century results similar to that of arzela and ascoli began to accumulate in the area of integral equations as investigated by david hilbert and erhard schmidt for a certain class of greens functions coming from solutions'</li><li>'also holds for dmodules if x s x and s are smooth varieties but f and g need not be flat or proper etc there is a quasiisomorphism g † [UNK] f f → [UNK] f ′ g ′ † f displaystyle gdagger int fmathcal fto int fgdagger mathcal f where − † displaystyle dagger and [UNK] displaystyle int denote the inverse and direct image functors for dmodules for etale torsion sheaves f displaystyle mathcal f there are two base change results referred to as proper and smooth base change respectively base change holds if f x → s displaystyle fxrightarrow s is proper it also holds if g is smooth provided that f is quasicompact and provided that the torsion of f displaystyle mathcal f is prime to the characteristic of the residue fields of xclosely related to proper base change is the following fact the two theorems are usually proved simultaneously let x be a variety over a separably closed field and f displaystyle mathcal f a constructible sheaf on x et displaystyle xtextet then h r x f displaystyle hrxmathcal f are finite in each of the following cases x is complete or f displaystyle mathcal f has no ptorsion where p is the characteristic of kunder additional assumptions deninger 1988 extended the proper base change theorem to nontorsion etale sheaves in close analogy to the topological situation mentioned above the base change map for an open immersion f g ∗ f ∗ f → f ∗ ′ g ′ ∗ f displaystyle gfmathcal fto fgmathcal f is not usually an isomorphism instead the extension by zero functor f displaystyle f satisfies an isomorphism g ∗ f f → f ′ g ∗ f displaystyle gfmathcal fto fgmathcal f this fact and the proper base change suggest to define the direct image functor with compact support for a map f by r f r p ∗ j displaystyle rfrpj where f p ∘ j displaystyle fpcirc j is a compactification of f ie a factorization into an open immersion followed by a proper map the proper base change theorem is needed to show that this is welldefined ie independent up to isomorphism of the choice of the compactification moreover again in analogy to the case of sheaves on a topological space a base change formula for g ∗ displaystyle g vs r f displaystyle rf does hold for nonproper maps f for the'</li></ul> |
| 30 | <ul><li>'of mtor inhibitors for the treatment of cancer was not successful at that time since then rapamycin has also shown to be effective for preventing coronary artery restenosis and for the treatment of neurodegenerative diseases the development of rapamycin as an anticancer agent began again in the 1990s with the discovery of temsirolimus cci779 this novel soluble rapamycin derivative had a favorable toxicological profile in animals more rapamycin derivatives with improved pharmacokinetics and reduced immunosuppressive effects have since then been developed for the treatment of cancer these rapalogs include temsirolimus cci779 everolimus rad001 and ridaforolimus ap23573 which are being evaluated in cancer clinical trials rapamycin analogs have similar therapeutic effects as rapamycin however they have improved hydrophilicity and can be used for oral and intravenous administration in 2012 national cancer institute listed more than 200 clinical trials testing the anticancer activity of rapalogs both as monotherapy or as a part of combination therapy for many cancer typesrapalogs which are the first generation mtor inhibitors have proven effective in a range of preclinical models however the success in clinical trials is limited to only a few rare cancers animal and clinical studies show that rapalogs are primarily cytostatic and therefore effective as disease stabilizers rather than for regression the response rate in solid tumors where rapalogs have been used as a singleagent therapy have been modest due to partial mtor inhibition as mentioned before rapalogs are not sufficient for achieving a broad and robust anticancer effect at least when used as monotherapyanother reason for the limited success is that there is a feedback loop between mtorc1 and akt in certain tumor cells it seems that mtorc1 inhibition by rapalogs fails to repress a negative feedback loop that results in phosphorylation and activation of akt these limitations have led to the development of the second generation of mtor inhibitors rapamycin and rapalogs rapamycin derivatives are small molecule inhibitors which have been evaluated as anticancer agents the rapalogs have more favorable pharmacokinetic profile compared to rapamycin the parent drug despite the same binding sites for mtor and fkbp12 sirolimus the bacterial natural product rapamycin or sirolimus a cytostatic agent has been used in combination therapy with corticosteroids'</li><li>'is appropriate typically either a baseline survey or a design survey of functional areas both types of surveys are explained in detail under astm standard e 235604 typically a baseline survey is performed by an epa or state licensed asbestos inspector the baseline survey provides the buyer with sufficient information on presumed asbestos at the facility often which leads to reduction in the assessed value of the building due primarily to forthcoming abatement costs note epa neshap national emissions standards for hazardous air pollutants and osha occupational safety and health administration regulations must be consulted in addition to astm standard e 235604 to ensure all statutory requirements are satisfied ex notification requirements for renovationdemolition asbestos is not a material covered under cercla comprehensive environmental response compensation and liability act innocent purchaser defense in some instances the us epa includes asbestos contaminated facilities on the npl superfund buyers should be careful not to purchase facilities even with an astm e 152705 phase i esa completed without a full understanding of all the hazards in a building or at a property without evaluating nonscope astm e 152705 materials such as asbestos lead pcbs mercury radon et al a standard astm e 152705 does not include asbestos surveys as standard practice in 1988 the united states environmental protection agency usepa issued regulations requiring certain us companies to report the asbestos used in their productsa senate subcommittee of the health education labor and pensions committee heard testimony on july 31 2001 regarding the health effects of asbestos members of the public doctors and scientists called for the united states to join other countries in a ban on the productseveral legislative remedies have been considered by the us congress but each time rejected for a variety of reasons in 2005 congress considered but did not pass legislation entitled the fairness in asbestos injury resolution act of 2005 the act would have established a 140 billion trust fund in lieu of litigation but as it would have proactively taken funds held in reserve by bankruptcy trusts manufacturers and insurance companies it was not widely supported either by victims or corporations on april 26 2005 philip j landrigan professor and chair of the department of community and preventive medicine at mount sinai medical center in new york city testified before the us senate committee on the judiciary against this proposed legislation he testified that many of the bills provisions were unsupported by medicine and would unfairly exclude a large number of people who had become ill or died from asbestos the approach to the diagnosis of disease caused by asbestos that is set forth in this bill is not consistent with the diagnostic criteria established by the american thoracic society if the bill is to deliver on'</li><li>'cancer slope factors csf are used to estimate the risk of cancer associated with exposure to a carcinogenic or potentially carcinogenic substance a slope factor is an upper bound approximating a 95 confidence limit on the increased cancer risk from a lifetime exposure to an agent by ingestion or inhalation this estimate usually expressed in units of proportion of a population affected per mg of substancekg body weightday is generally reserved for use in the lowdose region of the doseresponse relationship that is for exposures corresponding to risks less than 1 in 100 slope factors are also referred to as cancer potency factors pf for carcinogens it is commonly assumed that a small number of molecular events may evoke changes in a single cell that can lead to uncontrolled cellular proliferation and eventually to a clinical diagnosis of cancer this toxicity of carcinogens is referred to as being nonthreshold because there is believed to be essentially no level of exposure that does not pose some probability of producing a carcinogenic response therefore there is no dose that can be considered to be riskfree however some nongenotoxic carcinogens may exhibit a threshold whereby doses lower than the threshold do not invoke a carcinogenic response when evaluating cancer risks of genotoxic carcinogens theoretically an effect threshold cannot be estimated for chemicals that are carcinogens a twopart evaluation to quantify risk is often employed in which the substance first is assigned a weightofevidence classification and then a slope factor is calculated when the chemical is a known or probable human carcinogen a toxicity value that defines quantitatively the relationship between dose and response ie the slope factor is calculated because risk at low exposure levels is difficult to measure directly either by animal experiments or by epidemiologic studies the development of a slope factor generally entails applying a model to the available data set and using the model to extrapolate from the relatively high doses administered to experimental animals or the exposures noted in epidemiologic studies to the lower exposure levels expected for human contact in the environment highquality human data eg high quality epidemiological studies on carcinogens are preferable to animal data when human data are limited the most sensitive species is given the greatest emphasis occasionally in situations where no single study is judged most appropriate yet several studies collectively support the estimate the geometric mean of estimates from all studies may be adopted as the slope this practice ensures the inclusion of all relevant data slope factors are typically calculated for potential carcinogens in classes a b1'</li></ul> |
| 10 | <ul><li>'standards for reporting enzymology data strenda is an initiative as part of the minimum information standards which specifically focuses on the development of guidelines for reporting describing metadata enzymology experiments the initiative is supported by the beilstein institute for the advancement of chemical sciences strenda establishes both publication standards for enzyme activity data and strenda db an electronic validation and storage system for enzyme activity data launched in 2004 the foundation of strenda is the result of a detailed analysis of the quality of enzymology data in written and electronic publications the strenda project is driven by 15 scientists from all over the world forming the strenda commission and supporting the work with expertises in biochemistry enzyme nomenclature bioinformatics systems biology modelling mechanistic enzymology and theoretical biology the strenda guidelines propose those minimum information that is needed to comprehensively report kinetic and equilibrium data from investigations of enzyme activities including corresponding experimental conditions this minimum information is suggested to be addressed in a scientific publication when enzymology research data is reported to ensure that data sets are comprehensively described this allows scientists not only to review interpret and corroborate the data but also to reuse the data for modelling and simulation of biocatalytic pathways in addition the guidelines support researchers making their experimental data reproducible and transparentas of march 2020 more than 55 international biochemistry journal included the strenda guidelines in their authors instructions as recommendations when reporting enzymology data the strenda project is registered with fairsharingorg and the guidelines are part of the fairdom community standards for systems biology strenda db strenda db is a webbased storage and search platform that has incorporated the guidelines and automatically checks the submitted data on compliance with the strenda guidelines thus ensuring that the manuscript data sets are complete and valid a valid data set is awarded a strenda registry number srn and a fact sheet pdf is created containing all submitted data each dataset is registered at datacite and assigned a doi to refer and track the data after the publication of the manuscript in a peerreviewed journal the data in strenda db are made open accessible strenda db is a repository recommended by re3data and opendoar it is harvested by openaire the database service is recommended in the authors instructions of more than 10 biochemistry journals including nature the journal of biological chemistry elife and plos it has been referred as a standard tool for the validation and storage of enzyme kinetics data in multifold publications a recent study examining eleven publications including supporting information from two leading journals'</li><li>'an endergonic reaction is an anabolic chemical reaction that consumes energy it is the opposite of an exergonic reaction it has a positive δg because it takes more energy to break the bonds of the reactant than the energy of the products offer ie the products have weaker bonds than the reactants thus endergonic reactions are thermodynamically unfavorable additionally endergonic reactions are usually anabolicthe free energy δg gained or lost in a reaction can be calculated as follows δg δh − tδs where ∆g gibbs free energy ∆h enthalpy t temperature in kelvins and ∆s entropy glycolysis is the process of breaking down glucose into pyruvate producing two molecules of atp per 1 molecule of glucose in the process when a cell has a higher concentration of atp than adp ie has a high energy charge the cell cant undergo glycolysis releasing energy from available glucose to perform biological work pyruvate is one product of glycolysis and can be shuttled into other metabolic pathways gluconeogenesis etc as needed by the cell additionally glycolysis produces reducing equivalents in the form of nadh nicotinamide adenine dinucleotide which will ultimately be used to donate electrons to the electron transport chaingluconeogenesis is the opposite of glycolysis when the cells energy charge is low the concentration of adp is higher than that of atp the cell must synthesize glucose from carbon containing biomolecules such as proteins amino acids fats pyruvate etc for example proteins can be broken down into amino acids and these simpler carbon skeletons are used to build synthesize glucosethe citric acid cycle is a process of cellular respiration in which acetyl coenzyme a synthesized from pyruvate dehydrogenase is first reacted with oxaloacetate to yield citrate the remaining eight reactions produce other carboncontaining metabolites these metabolites are successively oxidized and the free energy of oxidation is conserved in the form of the reduced coenzymes fadh2 and nadh these reduced electron carriers can then be reoxidized when they transfer electrons to the electron transport chainketosis is a metabolic process whereby ketone bodies are used by the cell for energy instead of using glucose cells often turn to ketosis as a source of energy when glucose levels are low eg during starvationoxidative phosphorylation and the electron transport'</li><li>'the thanatotranscriptome denotes all rna transcripts produced from the portions of the genome still active or awakened in the internal organs of a body following its death it is relevant to the study of the biochemistry microbiology and biophysics of thanatology in particular within forensic science some genes may continue to be expressed in cells for up to 48 hours after death producing new mrna certain genes that are generally inhibited since the end of fetal development may be expressed again at this time clues to the existence of a postmortem transcriptome existed at least since the beginning of the 21st century but the word thanatotranscriptome from thanatos greek for death seems to have been first used in the scientific literature by javan et al in 2015 following the introduction of the concept of the human thanatomicrobiome in 2014 at the 66th annual meeting of the american academy of forensic sciences in seattle washington in 2016 researchers at the university of washington confirmed that up to 2 days 48 hours after the death of mice and zebrafish many genes still functioned changes in the quantities of mrna in the bodies of the dead animals proved that hundreds of genes with very different functions awoke just after death the researchers detected 548 genes that awoke after death in zebrafish and 515 in laboratory mice among these were genes involved in development of the organism including genes that are normally activated only in utero or in ovo in the egg during fetal development the thanatomicrobiome is characterized by a diverse assortment of microorganisms located in internal organs brain heart liver and spleen and blood samples collected after a human dies it is defined as the microbial community of internal body sites created by a successional process whereby trillions of microorganisms populate proliferate andor die within the dead body resulting in temporal modifications in the community composition over time characterization and quantification of the transcriptome in a given dead tissue can identify genetic assets which can be used to determine the regulatory mechanisms and set networks of gene expression the techniques commonly used for simultaneously measuring the concentration of a large number of different types of mrna include microarrays and highthroughput sequencing via rnaseq analysis from a serology postmortem can characterize the transcriptome of a particular tissue cell type or compare the transcriptomes between various experimental conditions such analysis can be complementary to the analysis of thanatomicrobiome to better understand the process of transformation of the necromass in the hours and days following death future applications of this information could include constructing a more'</li></ul> |
| 37 | <ul><li>'door being closed there is no opposition in this predicate 1b and 1c both have predicates showing transitions of the door going from being implicitly open to closed 1b gives the intransitive use of the verb close with no explicit mention of the causer but 1c makes explicit mention of the agent involved in the action the analysis of these different lexical units had a decisive role in the field of generative linguistics during the 1960s the term generative was proposed by noam chomsky in his book syntactic structures published in 1957 the term generative linguistics was based on chomskys generative grammar a linguistic theory that states systematic sets of rules x theory can predict grammatical phrases within a natural language generative linguistics is also known as governmentbinding theory generative linguists of the 1960s including noam chomsky and ernst von glasersfeld believed semantic relations between transitive verbs and intransitive verbs were tied to their independent syntactic organization this meant that they saw a simple verb phrase as encompassing a more complex syntactic structure lexicalist theories became popular during the 1980s and emphasized that a words internal structure was a question of morphology and not of syntax lexicalist theories emphasized that complex words resulting from compounding and derivation of affixes have lexical entries that are derived from morphology rather than resulting from overlapping syntactic and phonological properties as generative linguistics predicts the distinction between generative linguistics and lexicalist theories can be illustrated by considering the transformation of the word destroy to destruction generative linguistics theory states the transformation of destroy → destruction as the nominal nom destroy combined with phonological rules that produce the output destruction views this transformation as independent of the morphology lexicalist theory sees destroy and destruction as having idiosyncratic lexical entries based on their differences in morphology argues that each morpheme contributes specific meaning states that the formation of the complex word destruction is accounted for by a set of lexical rules which are different and independent from syntactic rulesa lexical entry lists the basic properties of either the whole word or the individual properties of the morphemes that make up the word itself the properties of lexical items include their category selection cselection selectional properties sselection also known as semantic selection phonological properties and features the properties of lexical items are idiosyncratic unpredictable and contain specific information about the lexical items that they describethe following is an example of a lexical entry for the verb put lexicalist theories state that a words meaning is'</li><li>'de se is latin for of oneself and in philosophy it is a phrase used to delineate what some consider a category of ascription distinct from de dicto and de re such ascriptions are found with propositional attitudes mental states an agent holds toward a proposition such de se ascriptions occur when an agent holds a mental state towards a proposition about themselves knowing that this proposition is about themselves a sentence such as peter thinks that he is pale where the pronoun he is meant to refer to peter is ambiguous in a way not captured by the de dicto de re distinction such a sentence could report that peter has the following thought i am pale or peter could have the following thought he is pale where it so happens that the pronoun he refers to peter but peter is unaware of it the first meaning expresses a belief de se while the second does not this notion is extensively discussed in the philosophical literature as well as in the theoretical linguistic literature the latter because some linguistic phenomena clearly are sensitive to this notion david lewiss 1979 article attitudes de dicto and de se gave full birth to the topic and his expression of it draws heavily on his distinctive theory of possible worlds but modern discussions on this topic originate with hectorneri castanedas discovery of what he called quasi indexicals or “ quasiindicators ” according to castaneda the speaker of the sentence “ mary believes that she herself is the winner ” uses the quasiindicator “ she herself ” often written “ she∗ ” to express marys firstperson reference to herself ie to mary that sentence would be the speakers way of depicting the proposition that mary would unambiguously express in the first person by “ i am the winner ” a clearer case can be illustrated simply imagine the following scenario peter who is running for office is drunk he is watching an interview of a candidate on tv not realizing that this candidate is himself liking what he hears he says i hope this candidate gets elected having witnessed this one can truthfully report peters hopes by uttering peter hopes that he will get elected where he refers to peter since this candidate indeed refers to peter however one could not report peters hopes by saying peter hopes to get elected this last sentence is only appropriate if peter had a de se hope that is a hope in the first person as if he had said i hope i get elected which is not the case here the study of the notion of belief de se thus includes that of quasiindexicals the linguistic theory of logophoricity and logophoric pronouns and the linguistic and literary'</li><li>'##mal ie near or closer to the speaker and distal ie far from the speaker andor closer to the addressee english exemplifies this with such pairs as this and that here and there etc in other languages the distinction is threeway or higher proximal ie near the speaker medial ie near the addressee and distal ie far from both this is the case in a few romance languages and in serbocroatian korean japanese thai filipino macedonian yaqui and turkish the archaic english forms yon and yonder still preserved in some regional dialects once represented a distal category that has now been subsumed by the formerly medial there in the sinhala language there is a fourway deixis system for both person and place near the speaker meː near the addressee oː close to a third person visible arəː and far from all not visible eː the malagasy language has seven degrees of distance combined with two degrees of visibility while many inuit languages have even more complex systems temporal deixis temporal deixis or time deixis concerns itself with the various times involved in and referred to in an utterance this includes time adverbs like now then and soon as well as different verbal tenses a further example is the word tomorrow which denotes the next consecutive day after any day it is used tomorrow when spoken on a day last year denoted a different day from tomorrow when spoken next week time adverbs can be relative to the time when an utterance is made what fillmore calls the encoding time or et or the time when the utterance is heard fillmores decoding time or dt although these are frequently the same time they can differ as in the case of prerecorded broadcasts or correspondence for example if one were to write temporal deictical terms are in italics it is raining now but i hope when you read this it will be sunnythe et and dt would be different with now referring to the moment the sentence is written and when referring to the moment the sentence is read tenses are generally separated into absolute deictic and relative tenses so for example simple english past tense is absolute such as in he wentwhereas the pluperfect is relative to some other deictically specified time as in he had gone though the traditional categories of deixis are perhaps the most obvious there are other types of deixis that are similarly pervasive in language use these categories of deixis were first discussed by fillmore and lyons and were echoed in works of others discourse deixis discourse deixis also referred'</li></ul> |
| 4 | <ul><li>'t fractional calculus fractionalorder system multifractal system'</li><li>'singleparticle trajectories spts consist of a collection of successive discrete points causal in time these trajectories are acquired from images in experimental data in the context of cell biology the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule molecules can now by visualized based on recent superresolution microscopy which allow routine collections of thousands of short and long trajectories these trajectories explore part of a cell either on the membrane or in 3 dimensions and their paths are critically influenced by the local crowded organization and molecular interaction inside the cell as emphasized in various cell types such as neuronal cells astrocytes immune cells and many others spt allowed observing moving particles these trajectories are used to investigate cytoplasm or membrane organization but also the cell nucleus dynamics remodeler dynamics or mrna production due to the constant improvement of the instrumentation the spatial resolution is continuously decreasing reaching now values of approximately 20 nm while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues a variant of superresolution microscopy called sptpalm is used to detect the local and dynamically changing organization of molecules in cells or events of dna binding by transcription factors in mammalian nucleus superresolution image acquisition and particle tracking are crucial to guarantee a high quality data once points are acquired the next step is to reconstruct a trajectory this step is done known tracking algorithms to connect the acquired points tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise the redundancy of many short spts is a key feature to extract biophysical information parameters from empirical data at a molecular level in contrast long isolated trajectories have been used to extract information along trajectories destroying the natural spatial heterogeneity associated to the various positions the main statistical tool is to compute the meansquare displacement msd or second order statistical moment ⟨ x t δ t − x t 2 ⟩ [UNK] t α displaystyle langle xtdelta txt2rangle sim talpha average over realizations where α displaystyle alpha is the called the anomalous exponentfor a brownian motion ⟨ x t δ t − x t 2 ⟩ 2 n d t displaystyle langle xtdelta txt2rangle 2ndt where d is the diffusion coefficient n is dimension of the space some other properties can also be recovered from long trajectories such as the'</li><li>'displaystyle k party communication complexity c a k f displaystyle cakf of a function f displaystyle f with respect to partition a displaystyle a is the minimum of costs of those k displaystyle k party protocols which compute f displaystyle f the k displaystyle k party symmetric communication complexity of f displaystyle f is defined as c k f max a c a k f displaystyle ckfmax acakf where the maximum is taken over all kpartitions of set x x 1 x 2 x n displaystyle xx1x2xn for a general upper bound both for two and more players let us suppose that a1 is one of the smallest classes of the partition a1a2ak then p1 can compute any boolean function of s with a1 1 bits of communication p2 writes down the a1 bits of a1 on the blackboard p1 reads it and computes and announces the value f x displaystyle fx so the following can be written c k f ≤ [UNK] n k [UNK] 1 displaystyle ckfleq bigg lfloor n over kbigg rfloor 1 the generalized inner product function gip is defined as follows let y 1 y 2 y k displaystyle y1y2yk be n displaystyle n bit vectors and let y displaystyle y be the n displaystyle n times k displaystyle k matrix with k displaystyle k columns as the y 1 y 2 y k displaystyle y1y2yk vectors then g i p y 1 y 2 y k displaystyle gipy1y2yk is the number of the all1 rows of matrix y displaystyle y taken modulo 2 in other words if the vectors y 1 y 2 y k displaystyle y1y2yk correspond to the characteristic vectors of k displaystyle k subsets of an n displaystyle n element baseset then gip corresponds to the parity of the intersection of these k displaystyle k subsets it was shown that c k g i p ≥ c n 4 k displaystyle ckgipgeq cn over 4k with a constant c 0 an upper bound on the multiparty communication complexity of gip shows that c k g i p ≤ c n 2 k displaystyle ckgipleq cn over 2k with a constant c 0 for a general boolean function f one can bound the multiparty communication complexity of f by using its l1 norm as follows c k f o k 2 log n l 1 f [UNK] n l 1 2 f 2 k [UNK] displaystyle ckfobigg k2log'</li></ul> |
| 26 | <ul><li>'in physical chemistry and materials science texture is the distribution of crystallographic orientations of a polycrystalline sample it is also part of the geological fabric a sample in which these orientations are fully random is said to have no distinct texture if the crystallographic orientations are not random but have some preferred orientation then the sample has a weak moderate or strong texture the degree is dependent on the percentage of crystals having the preferred orientation texture is seen in almost all engineered materials and can have a great influence on materials properties the texture forms in materials during thermomechanical processes for example during production processes eg rolling consequently the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture controlling the production process in combination with the characterization of texture and the materials microstructure help to determine the materials properties ie the processingmicrostructuretextureproperty relationship also geologic rocks show texture due to their thermomechanic history of formation processes one extreme case is a complete lack of texture a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites the opposite extreme is a perfect single crystal which likely has anisotropic properties by geometric necessity texture can be determined by various methods some methods allow a quantitative analysis of the texture while others are only qualitative among the quantitative techniques the most widely used is xray diffraction using texture goniometers followed by the electron backscatter diffraction ebsd method in scanning electron microscopes qualitative analysis can be done by laue photography simple xray diffraction or with a polarized microscope neutron and synchrotron highenergy xray diffraction are suitable for determining textures of bulk materials and in situ analysis whereas laboratory xray diffraction instruments are more appropriate for analyzing textures of thin films texture is often represented using a pole figure in which a specified crystallographic axis or pole from each of a representative number of crystallites is plotted in a stereographic projection along with directions relevant to the materials processing history these directions define the socalled sample reference frame and are because the investigation of textures started from the cold working of metals usually referred to as the rolling direction rd the transverse direction td and the normal direction nd for drawn metal wires the cylindrical fiber axis turned out as the sample direction around which preferred orientation is typically observed see below there are several textures that are commonly found in processed cubic materials they are named either by the scientist that discovered them or by'</li><li>'are specified according to several standards the most common standard in europe is iso 94541 also known as din en 294541this standard specifies each flux by a fourcharacter code flux type base activator and form the form is often omitted therefore 112 means rosin flux with halides the older german din 8511 specification is still often in use in shops in the table below note that the correspondence between din 8511 and iso 94541 codes is not onetoone one standard increasingly used eg in the united states is jstd004 it is very similar to din en 6119011 four characters two letters then one letter and last a number represent flux composition flux activity and whether activators include halides first two letters base ro rosin re resin or organic in inorganic third letter activity l low m moderate h high number halide content 0 less than 005 in weight “ halidefree ” 1 halide content depends on activity less than 05 for low activity 05 to 20 for moderate activity greater than 20 for high activityany combination is possible eg rol0 rem1 or orh0 jstd004 characterizes the flux by reliability of residue from a surface insulation resistance sir and electromigration standpoint it includes tests for electromigration and surface insulation resistance which must be greater than 100 mω after 168 hours at elevated temperature and humidity with a dc bias applied the old milf14256 and qqs571 standards defined fluxes as r rosin rma rosin mildly activated ra rosin activated ws watersolubleany of these categories may be noclean or not depending on the chemistry selected and the standard that the manufacturer requires fluxcored arc welding gas metal arc welding shielded metal arc welding'</li><li>'are very soft and ductile the resulting aluminium alloy will have much greater strength adding a small amount of nonmetallic carbon to iron trades its great ductility for the greater strength of an alloy called steel due to its veryhigh strength but still substantial toughness and its ability to be greatly altered by heat treatment steel is one of the most useful and common alloys in modern use by adding chromium to steel its resistance to corrosion can be enhanced creating stainless steel while adding silicon will alter its electrical characteristics producing silicon steel like oil and water a molten metal may not always mix with another element for example pure iron is almost completely insoluble with copper even when the constituents are soluble each will usually have a saturation point beyond which no more of the constituent can be added iron for example can hold a maximum of 667 carbon although the elements of an alloy usually must be soluble in the liquid state they may not always be soluble in the solid state if the metals remain soluble when solid the alloy forms a solid solution becoming a homogeneous structure consisting of identical crystals called a phase if as the mixture cools the constituents become insoluble they may separate to form two or more different types of crystals creating a heterogeneous microstructure of different phases some with more of one constituent than the other however in other alloys the insoluble elements may not separate until after crystallization occurs if cooled very quickly they first crystallize as a homogeneous phase but they are supersaturated with the secondary constituents as time passes the atoms of these supersaturated alloys can separate from the crystal lattice becoming more stable and forming a second phase that serves to reinforce the crystals internally some alloys such as electrum — an alloy of silver and gold — occur naturally meteorites are sometimes made of naturally occurring alloys of iron and nickel but are not native to the earth one of the first alloys made by humans was bronze which is a mixture of the metals tin and copper bronze was an extremely useful alloy to the ancients because it is much stronger and harder than either of its components steel was another common alloy however in ancient times it could only be created as an accidental byproduct from the heating of iron ore in fires smelting during the manufacture of iron other ancient alloys include pewter brass and pig iron in the modern age steel can be created in many forms carbon steel can be made by varying only the carbon content producing soft alloys like mild steel or hard alloys like spring steel alloy steels can be made by adding other elements such as chromium moly'</li></ul> |
| 20 | <ul><li>'##ky to edward said every word in my book is accurate and you cant just simply say its false without documenting it tell me one thing in the book now that is false amy goodman okay lets go to the book the case for israel 10000 on democracy now finkelstein replied to that specific challenge for material errors found in his book overall and dershowitz upped it to 25000 for another particular issue that they disputedfinkelstein referred to concrete facts which are not particularly controversial stating that in the case for israel dershowitz attributes to israeli historian benny morris the figure of between 2000 and 3000 palestinian arabs who fled their homes from april to june 1948 when the range in the figures presented by morris is actually 200000 to 300000dershowitz responded to finkelsteins reply by stating that such a mistake could not have been intentional as it harmed his own side of the debate obviously the phrase 2000 to 3000 arabs refers either to a subphase of the flight or is a typographical error in this particular context dershowitzs argument is that palestinians left as a result of orders issued by palestinian commanders if in fact 200000 were told to leave instead of 2000 that strengthens my argument considerably in his review of beyond chutzpah echoing finkelsteins criticisms michael desch political science professor at university of notre dame observed not only did dershowitz improperly present peterss ideas he may not even have bothered to read the original sources she used to come up with them finkelstein somehow managed to get uncorrected page proofs of the case for israel in which dershowitz appears to direct his research assistant to go to certain pages and notes in peterss book and place them in his footnotes directly 32 col 3 oxford academic avi shlaim had also been critical of dershowitz saying he believed that the charge of plagiarism is proved in a manner that would stand up in courtin deschs review of beyond chutzpah summarizing finkelsteins case against dershowitz for torturing the evidence particularly finkelsteins argument relating to dershowitzs citations of morris desch observed there are two problems with dershowitzs heavy reliance on morris the first is that morris is hardly the leftwing peacenik that dershowitz makes him out to be which means that calling him as a witness in israels defense is not very helpful to the case the more important problem is that many of the points dershowitz cites morris as supporting — that the early zionists wanted peaceful coexi'</li><li>'sees it as a steady evolution of british parliamentary institutions benevolently watched over by whig aristocrats and steadily spreading social progress and prosperity it described a continuity of institutions and practices since anglosaxon times that lent to english history a special pedigree one that instilled a distinctive temper in the english nation as whigs liked to call it and an approach to the world which issued in law and lent legal precedent a role in preserving or extending the freedoms of englishmenpaul rapin de thoyrass history of england published in 1723 became the classic whig history for the first half of the eighteenth century rapin claimed that the english had preserved their ancient constitution against the absolutist tendencies of the stuarts however rapins history lost its place as the standard history of england in the late 18th century and early 19th century to that of david humewilliam blackstones commentaries on the laws of england 1765 – 1769 reveals many whiggish traitsaccording to arthur marwick however henry hallam was the first whig historian publishing constitutional history of england in 1827 which greatly exaggerated the importance of parliaments or of bodies whig historians thought were parliaments while tending to interpret all political struggles in terms of the parliamentary situation in britain during the nineteenth century in terms that is of whig reformers fighting the good fight against tory defenders of the status quo in the history of england 1754 – 1761 hume challenged whig views of the past and the whig historians in turn attacked hume but they could not dent his history in the early 19th century some whig historians came to incorporate humes views dominant for the previous fifty years these historians were members of the new whigs around charles james fox 1749 – 1806 and lord holland 1773 – 1840 in opposition until 1830 and so needed a new historical philosophy fox himself intended to write a history of the glorious revolution of 1688 but only managed the first year of james iis reign a fragment was published in 1808 james mackintosh then sought to write a whig history of the glorious revolution published in 1834 as the history of the revolution in england in 1688 hume still dominated english historiography but this changed when thomas babington macaulay entered the field utilising fox and mackintoshs work and manuscript collections macaulays history of england was published in a series of volumes from 1848 to 1855 it proved an immediate success replacing humes history and becoming the new orthodoxy as if to introduce a linear progressive view of history the first chapter of macaulays history of england proposes the history of our country during the last hundred and sixty years is eminently the history of physical'</li><li>'the long nineteenth century is a term for the 125year period beginning with the onset of the french revolution in 1789 and ending with the outbreak of world war i in 1914 it was coined by russian writer ilya ehrenburg and later popularized by british marxist historian eric hobsbawm the term refers to the notion that the period reflects a progression of ideas which are characteristic to an understanding of the 19th century in europe the concept is an adaption of fernand braudels 1949 notion of le long seizieme siecle the long 16th century 1450 – 1640 and a recognized category of literary history although a period often broadly and diversely defined by different scholars numerous authors before and after hobsbawms 1995 publication have applied similar forms of book titles or descriptions to indicate a selective time frame for their works such as s ketterings french society 1589 – 1715 – the long seventeenth century e anthony wrigleys british population during the long eighteenth century 1680 – 1840 or d blackbourns the long nineteenth century a history of germany 1780 – 1918 however the term has been used in support of historical publications to connect with broader audiences and is regularly cited in studies and discussions across academic disciplines such as history linguistics and the arts hobsbawm lays out his analysis in the age of revolution europe 1789 – 1848 1962 the age of capital 1848 – 1875 1975 and the age of empire 1875 – 1914 1987 hobsbawm starts his long 19th century with the french revolution which sought to establish universal and egalitarian citizenship in france and ends it with the outbreak of world war i upon the conclusion of which in 1918 the longenduring european power balance of the 19th century proper 1801 – 1900 was eliminated in a sequel to the abovementioned trilogy the age of extremes the short twentieth century 1914 – 1991 1994 hobsbawm details the short 20th century a concept originally proposed by ivan t berend beginning with world war i and ending with the fall of the soviet union between 1914 – 1991a more generalized version of the long 19th century lasting from 1750 to 1914 is often used by peter n stearns in the context of the world history school in religious contexts specifically those concerning the history of the catholic church the long 19th century was a period of centralization of papal power over the catholic church this centralization was in opposition to the increasingly centralized nation states and contemporary revolutionary movements and used many of the same organizational and communication techniques as its rivals the churchs long 19th century extended from the french revolution 1789 until the death of pope pius xii 1958 this covers'</li></ul> |
| 13 | <ul><li>'of group musicmaking through the long development of the republic system developed and employed by members of the network band powerbooks unplugged republic is built into the supercollider language and allows participants to collaboratively write live code that is distributed across the network of computers there are similar efforts in other languages such as the distributed tuple space used in the impromptu language additionally overtone impromptu and extempore support multiuser sessions in which any number of programmers can intervene across the network in a given runtime process the practice of writing code in group can be done in the same room through a local network or from remote places accessing a common server terms like laptop band laptop orchestra collaborative live coding or collective live coding are used to frame a networked live coding practice both in a local or remote way toplap the temporarytransnationalterrestrialtransdimensional organisation for the promotionproliferationpermanencepurity of live algorithmaudioartartistic programming is an informal organization formed in february 2004 to bring together the various communities that had formed around live coding environments the toplap manifesto asserts several requirements for a toplap compliant performance in particular that performers screens should be projected and not hiddenonthefly promotes live coding practice since 2020 this is a project cofunded by the creative european program and run in hangar zkm ljudmila and creative code utrecht a number of research projects and research groups have been created to explore live coding often taking interdisciplinary approaches bridging the humanities and sciences first efforts to both develop live coding systems and embed the emerging field in the broader theoretical context happened in the research project artistic interactivity in hybrid networks from 2005 to 2008 funded by the german research foundationfurther the live coding research network was funded by the uk arts and humanities research council for two years from february 2014 supporting a range of activities including symposia workshops and an annual international conference called international conference on live coding iclc algorave — event where music andor visuals are generated from algorithms generally live coded demoscene — subculture around coding audiovisual presentations demos exploratory programming — the practice of building software as a way to understand its requirements and structure interactive programming — programming practice of using live coding in software development nime — academic and artistic conference on advances in music technology sometimes featuring live coding performances and research presentations andrews robert “ real djs code live ” wired online 7 march 2006 brown andrew r “ code jamming ” mc journal 96 december 2006 magnusson thor herding cats observing live coding in the wild computer music journal'</li><li>'##y the 1960s produced a strain of cybernetic art that was very much concerned with the shared circuits within and between the living and the technological a line of cybernetic art theory also emerged during the late 1960s writers like jonathan benthall and gene youngblood drew on cybernetics and cybernetic the most substantial contributors here were the british artist and theorist roy ascott with his essay behaviourist art and the cybernetic vision in the journal cybernetica 1966 – 67 and the american critic and theorist jack burnham in beyond modern sculpture from 1968 burnham builds cybernetic art into an extensive theory that centers on arts drive to imitate and ultimately reproduce life also in 1968 curator jasia reichardt organized the landmark exhibition cybernetic serendipity at the institute of contemporary art in london generative art is art that has been generated composed or constructed in an algorithmic manner through the use of systems defined by computer software algorithms or similar mathematical or mechanical or randomised autonomous processes sonia landy sheridan established generative systems as a program at the school of the art institute of chicago in 1970 in response to social change brought about in part by the computerrobot communications revolution the program which brought artists and scientists together was an effort at turning the artists passive role into an active one by promoting the investigation of contemporary scientific — technological systems and their relationship to art and life unlike copier art which was a simple commercial spinoff generative systems was actually involved in the development of elegant yet simple systems intended for creative use by the general population generative systems artists attempted to bridge the gap between elite and novice by directing the line of communication between the two thus bringing first generation information to greater numbers of people and bypassing the entrepreneur process art is an artistic movement as well as a creative sentiment and world view where the end product of art and craft the objet d ’ art is not the principal focus the process in process art refers to the process of the formation of art the gathering sorting collating associating and patterning process art is concerned with the actual doing art as a rite ritual and performance process art often entails an inherent motivation rationale and intentionality therefore art is viewed as a creative journey or process rather than as a deliverable or end product in the artistic discourse the work of jackson pollock is hailed as an antecedent process art in its employment of serendipity has a marked correspondence with dada change and transience are marked themes in the process art movement the guggenheim museum states that robert morris in 1968 had a groundbreaking exhibition and essay defining the movement and'</li><li>'music visualization or music visualisation a feature found in electronic music visualizers and media player software generates animated imagery based on a piece of music the imagery is usually generated and rendered in real time and in a way synchronized with the music as it is played visualization techniques range from simple ones eg a simulation of an oscilloscope display to elaborate ones which often include a number of composited effects the changes in the musics loudness and frequency spectrum are among the properties used as input to the visualization effective music visualization aims to attain a high degree of visual correlation between a musical tracks spectral characteristics such as frequency and amplitude and the objects or components of the visual image being rendered and displayed music visualization can be defined in contrast to previous existing pregenerated music plus visualization combinations as for example music videos by its characteristic as being realtime generated another possible distinction is seen by some in the ability of some music visualization systems such as geiss milkdrop to create different visualizations for each song or audio every time the program is run in contrast to other forms of music visualization such as music videos or a laser lighting display which always show the same visualization music visualization may be achieved in a 2d or a 3d coordinate system where up to six dimensions can be modified the 4th 5th and 6th dimensions being color intensity and transparency the first electronic music visualizer was the atari video music introduced by atari inc in 1976 and designed by the initiator of the home version of pong robert brown the idea was to create a visual exploration that could be implemented into a hifi stereo system in the united kingdom music visualization was first pioneered by fred judd music and audio players were available on early home computers sound to light generator 1985 infinite software used the zx spectrums cassette player for example the 1984 movie electric dreams prominently made use of one although as a pregenerated effect rather than calculated in realtime for pcdos one of the first modern music visualization programs was the opensource multiplatform cthugha in 1993 in the 1990s the emerging demo and tracker music scene pioneered the realtime technics for music visualization on the pc platform resulting examples are cubic player 1994 inertia player 1995 or in general their realtime generated demossubsequently pc computer music visualization became widespread in the mid to late 1990s as applications such as winamp 1997 audion 1999 and soundjam 2000 by 1999 there were several dozen freeware nontrivial music visualizers in distribution in particular milkdrop 2001 and its predecessor ge'</li></ul> |
| 33 | <ul><li>'a psychic detective is a person who investigates crimes by using purported paranormal psychic abilities examples have included postcognition the paranormal perception of the past psychometry information psychically gained from objects telepathy dowsing clairvoyance and remote viewing in murder cases psychic detectives may purport to be in communication with the spirits of the murder victims individuals claiming psychic abilities have stated they have helped police departments to solve crimes however there is a lack of police corroboration of their claims many police departments around the world have released official statements saying that they do not regard psychics as credible or useful on cases many prominent police cases often involving missing persons have received the attention of alleged psychics in november 2004 purported psychic sylvia browne told the mother of kidnapping victim amanda berry who had disappeared 19 months earlier shes not alive honey browne also claimed to have had a vision of berrys jacket in the garbage with dna on it berrys mother died two years later believing that her daughter had been killed berry was found alive in may 2013 having been a kidnapping victim of ariel castro along with michelle knight and gina dejesus after berry was found alive browne received criticism for the false declaration that berry was dead browne also became involved in the case of shawn hornbeck which received the attention of psychics after the elevenyearold went missing on 6 october 2002 browne appeared on the montel williams show and provided the parents of shawn hornbeck a detailed description of the abductor and where hornbeck could be found browne responded no when asked if he was still alive when hornbeck was found alive more than four years later few of the details given by browne were correct shawn hornbecks father craig akers has stated that brownes declaration was one of the hardest things that weve ever had to hear and that her misinformation diverted investigators wasting precious police timewhen washington dc intern chandra levy went missing on 1 may 2001 psychics from around the world provided tips suggesting that her body would be found in places such as the basement of a smithsonian storage building in the potomac river and buried in the nevada desert among many other possible locations each tip led nowhere a little more than a year after her disappearance levys body was accidentally discovered by a man walking his dog in a remote section of rock creek parkfollowing the disappearance of elizabeth smart on 5 june 2002 the police received as many as 9000 tips from psychics and others crediting visions and dreams as their source responding to these tips took many police hours according to salt lake city police chief lieutenant chris burbank yet elizabeth smarts father ed'</li><li>'telepathy and communication with the dead were impossible and that the mind of man cannot be read through telepathy but only by muscle reading in the late 19th century the creery sisters mary alice maud kathleen and emily were tested by the society for psychical research and believed to have genuine psychic ability however during a later experiment they were caught utilizing signal codes and they confessed to fraud george albert smith and douglas blackburn were claimed to be genuine psychics by the society for psychical research but blackburn confessed to fraud for nearly thirty years the telepathic experiments conducted by mr g a smith and myself have been accepted and cited as the basic evidence of the truth of thought transference the whole of those alleged experiments were bogus and originated in the honest desire of two youths to show how easily men of scientific mind and training could be deceived when seeking for evidence in support of a theory they were wishful to establish between 1916 and 1924 gilbert murray conducted 236 experiments into telepathy and reported 36 as successful however it was suggested that the results could be explained by hyperaesthesia as he could hear what was being said by the sender psychologist leonard t troland had carried out experiments in telepathy at harvard university which were reported in 1917 the subjects produced below chance expectationsarthur conan doyle and w t stead were duped into believing julius and agnes zancig had genuine psychic powers both doyle and stead wrote that zancigs performed telepathy in 1924 julius and agnes zancig confessed that their mind reading act was a trick and published the secret code and all the details of the trick method they had used under the title of our secrets in a london newspaperin 1924 robert h gault of northwestern university with gardner murphy conducted the first american radio test for telepathy the results were entirely negative one of their experiments involved the attempted thought transmission of a chosen number between one and onethousand out of 2010 replies none was correct this is below the theoretical chance figure of two correct replies in such a situationin february 1927 with the cooperation of the british broadcasting corporation bbc v j woolley who was at the time the research officer for the spr arranged a telepathy experiment in which radio listeners were asked to take part the experiment involved agents thinking about five selected objects in an office at tavistock square whilst listeners on the radio were asked to identify the objects from the bbc studio at savoy hill 24659 answers were received the results revealed no evidence of telepathya famous experiment in telepathy was recorded by the american author upton sinclair'</li><li>'bars by telekinesis he was tested in the 1970s but failed to produce any paranormal effects in scientifically controlled conditions he was tested on january 19 1977 during a twohour experiment in a paris laboratory directed by physicist yves farge a magician was also present girard failed to make any objects move paranormally he failed two tests in grenoble in june 1977 with magician james randi he was also tested on september 24 1977 at a laboratory at the nuclear research centre and failed to bend any bars or change the metals structure other experiments into spoonbending were also negative and witnesses described his feats as fraudulent girard later admitted he sometimes cheated to avoid disappointing the public but insisted he had genuine psychic power magicians and scientists have written that he produced all his alleged telekinetic feats through fraudulent meansstephen north a british psychic in the late 1970s was known for his alleged telekinetic ability to bend spoons and teleport objects in and out of sealed containers british physicist john hasted tested north in a series of experiments which he claimed had demonstrated telekinesis though his experiments were criticized for lack of scientific controls north was tested in grenoble on december 19 1977 in scientific conditions and the results were negative according to james randi during a test at birkbeck college north was observed to have bent a metal sample with his bare hands randi wrote i find it unfortunate that hasted never had an epiphany in which he was able to recognize just how thoughtless cruel and predatory were the acts perpetrated on him by fakers who took advantage of his naivety and trusttelekinesis parties were a cultural fad in the 1980s begun by jack houck where groups of people were guided through rituals and chants to awaken metalbending powers they were encouraged to shout at the items of cutlery they had brought and to jump and scream to create an atmosphere of pandemonium or what scientific investigators called heightened suggestibility critics were excluded and participants were told to avoid looking at their hands thousands of people attended these emotionally charged parties and many were convinced they had bent the objects by paranormal means 149 – 161 telekinesis parties have been described as a campaign by paranormal believers to convince people of the existence of telekinesis on the basis of nonscientific data from personal experience and testimony the united states national academy of sciences has criticized telekinesis parties on the grounds that conditions are not reliable for obtaining scientific results and are just those which psychologists and others have described as creating states of heightened suggest'</li></ul> |
| 7 | <ul><li>'an audiogram is a graph that shows the audible threshold for standardized frequencies as measured by an audiometer the y axis represents intensity measured in decibels db and the x axis represents frequency measured in hertz hz the threshold of hearing is plotted relative to a standardised curve that represents normal hearing in dbhl they are not the same as equalloudness contours which are a set of curves representing equal loudness at different levels as well as at the threshold of hearing in absolute terms measured in db spl sound pressure level the frequencies displayed on the audiogram are octaves which represent a doubling in frequency eg 250 hz 500 hz 1000 hz wtc commonly tested interoctave frequenices eg 3000 hz may also be displayed the intensities displayed on the audiogram appear as linear 10 dbhl steps however decibels are a logarithimic scale so that successive 10 db increments represent greater increases in loudness for humans normal hearing is between −10 dbhl and 15 dbhl although 0 db from 250 hz to 8 khz is deemed to be average normal hearing hearing thresholds of humans and other mammals can be found with behavioural hearing tests or physiological tests used in audiometry for adults a behavioural hearing test involves a tester who presents tones at specific frequencies pitches and intensities loudnesses when the testee hears the sound he or she responds eg by raising a hand or pressing a button the tester records the lowest intensity sound the testee can hear with children an audiologist makes a game out of the hearing test by replacing the feedback device with activityrelated toys such as blocks or pegs this is referred to as conditioned play audiometry visual reinforcement audiometry is also used with children when the child hears the sound he or she looks in the direction the sound came from and are reinforced with a light andor animated toy a similar technique can be used when testing some animals but instead of a toy food can be used as a reward for responding to the sound physiological tests do not need the patient to respond katz 2002 for example when performing the brainstem auditory evoked potentials the patients brainstem responses are being measured when a sound is played into their ear or otoacoustic emissions which are generated by a healthy inner ear either spontaneously or evoked by an outside stimulus in the us the niosh recommends that people who are regularly exposed to hazardous noise have their hearing tested once a year or every three years otherwise audiograms are produced using a piece of test equipment called an audiometer and this'</li><li>'##platinin addition to medications hearing loss can also result from specific chemicals in the environment metals such as lead solvents such as toluene found in crude oil gasoline and automobile exhaust for example and asphyxiants combined with noise these ototoxic chemicals have an additive effect on a persons hearing loss hearing loss due to chemicals starts in the high frequency range and is irreversible it damages the cochlea with lesions and degrades central portions of the auditory system for some ototoxic chemical exposures particularly styrene the risk of hearing loss can be higher than being exposed to noise alone the effects is greatest when the combined exposure include impulse noise a 2018 informational bulletin by the us occupational safety and health administration osha and the national institute for occupational safety and health niosh introduces the issue provides examples of 
ototoxic chemicals lists the industries and occupations at risk and provides prevention informationthere can be damage either to the ear whether the external or middle ear to the cochlea or to the brain centers that process the aural information conveyed by the ears damage to the middle ear may include fracture and discontinuity of the ossicular chain damage to the inner ear cochlea may be caused by temporal bone fracture people who sustain head injury are especially vulnerable to hearing loss or tinnitus either temporary or permanent sound waves reach the outer ear and are conducted down the ear canal to the eardrum causing it to vibrate the vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear the fluid moves hair cells stereocilia and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve the auditory nerve takes the impulses to the brainstem which sends the impulses to the midbrain finally the signal goes to the auditory cortex of the temporal lobe to be interpreted as soundhearing loss is most commonly caused by longterm exposure to loud noises from recreation or from work that damage the hair cells which do not grow back on their ownolder people may lose their hearing from long exposure to noise changes in the inner ear changes in the middle ear or from changes along the nerves from the ear to the brain identification of a hearing loss is usually conducted by a general practitioner medical doctor otolaryngologist certified and licensed audiologist school or industrial audiometrist or other audiometric technician diagnosis of the cause of a hearing loss is carried out by a specialist physician audiovestibular physician or otorhinolaryngologist hearing loss'</li><li>'##anometry and speech audiometry may be helpful testing is performed by an audiologist there is no proven or recommended treatment or cure for snhl management of hearing loss is usually by hearing strategies and hearing aids in cases of profound or total deafness a cochlear implant is a specialised hearing aid that may restore a functional level of hearing snhl is at least partially preventable by avoiding environmental noise ototoxic chemicals and drugs and head trauma and treating or inoculating against certain triggering diseases and conditions like meningitis since the inner ear is not directly accessible to instruments identification is by patient report of the symptoms and audiometric testing of those who present to their doctor with sensorineural hearing loss 90 report having diminished hearing 57 report having a plugged feeling in ear and 49 report having ringing in ear tinnitus about half report vestibular vertigo problemsfor a detailed exposition of symptoms useful for screening a selfassessment questionnaire was developed by the american academy of otolaryngology called the hearing handicap inventory for adults hhia it is a 25question survey of subjective symptoms sensorineural hearing loss may be genetic or acquired ie as a consequence of disease noise trauma etc people may have a hearing loss from birth congenital or the hearing loss may come on later many cases are related to old age agerelated hearing loss can be inherited more than 40 genes have been implicated in the cause of deafness there are 300 syndromes with related hearing loss and each syndrome may have causative genesrecessive dominant xlinked or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear some may be single point 
mutations whereas others are due to chromosomal abnormalities some genetic causes give rise to a late onset hearing loss mitochondrial mutations can cause snhl ie m1555ag which makes the individual sensitive to the ototoxic effects of aminoglycoside antibiotics the most common cause of recessive genetic congenital hearing impairment in developed countries is dfnb1 also known as connexin 26 deafness or gjb2related deafness the most common syndromic forms of hearing impairment include dominant stickler syndrome and waardenburg syndrome and recessive pendred syndrome and usher syndrome mitochondrial mutations causing deafness are rare mttl1 mutations cause midd maternally inherited deafness and diabetes and other conditions which may include deafness as part of the picture tmprss3 gene was identified by its association with both congenital and childhood onset autosomal recessive deafness this gene is expressed in fetal co'</li></ul> |
| 3 | <ul><li>'##ilise and suggest other technologies such as mobile phones or psion organisers as such feedback studies involve asynchronous communication between the participants and the researchers as the participants ’ data is recorded in their diary first and then passed on to the researchers once completefeedback studies are scalable that is a largescale sample can be used since it is mainly the participants themselves who are responsible for collecting and recording data in elicitation studies participants capture media as soon as the phenomenon occurs the media is usually in the form of a photograph but can be in other different forms as well and so the recording is generally quick and less effortful than feedback studies these media are then used as prompts and memory cues to elicit memories and discussion in interviews that take place much later as such elicitation studies involve synchronous communication between the participants and the researchers usually through interviewsin these later interviews the media and other memory cues such as what activities were done before and after the event can improve participants ’ episodic memory in particular photos were found to elicit more specific recall than all other media types there are two prominent tradeoffs between each type of study feedback studies involve answering questions more frequently and in situ therefore enabling more accurate recall but more effortful recording in contrast elicitation studies involve quickly capturing media in situ but answering questions much later therefore enabling less effortful recording but potentially inaccurate recall diary studies are most often used when observing behavior over time in a natural environment they can be beneficial when one is looking to find new qualitative and quantitative data advantages of diary studies are numerous they allow collecting longitudinal and temporal information reporting events and experiences in context and inthemoment participants to diary their behaviours thoughts and feelings inthemoment thereby minimising the potential for post rationalisation determining the antecedents correlations and consequences of daily experiences and behaviors there are some limitations of diary studies mainly due to their characteristics of reliance on memory and selfreport measures there is low control low participation and there is a risk of disturbing the action in feedback studies it can be troubling and disturbing to write everything down the validity of diary studies rests on the assumption that participants will accurately recall and record their experiences this is somewhat more easily enabled by the fact that diaries are completed media is captured in a natural environment and closer in realtime to any occurrences of the phenomenon of interest however there are multiple barriers to obtaining accurate data such as social desirability bias where participants may answer in a way that makes them appear more socially desirable this may be more prominent in longitudinal studies'</li><li>'turn killed by his relations and friends the moment a grey hair appears on his head all the noble savages wars with his fellowsavages and he takes no pleasure in anything else are wars of extermination — which is the best thing i know of him and the most comfortable to my mind when i look at him he has no moral feelings of any kind sort or description and his mission may be summed up as simply diabolical dickens ends his cultural criticism by reiterating his argument against the romanticized 
persona of the noble savage to conclude as i began my position is that if we have anything to learn from the noble savage it is what to avoid his virtues are a fable his happiness is a delusion his nobility nonsense we have no greater justification for being cruel to the miserable object than for being cruel to a william shakespeare or an isaac newton but he passes away before an immeasurably better and higher power than ever ran wild in any earthly woods and the world will be all the better when this place earth knows him no more in 1860 the physician john crawfurd and the anthropologist james hunt identified the racial stereotype of the noble savage as an example of scientific racism yet as advocates of polygenism — that each race is a distinct species of man — crawfurd and hunt dismissed the arguments of their opponents by accusing them of being proponents of rousseaus noble savage later in his career crawfurd reintroduced the noble savage term to modern anthropology and deliberately ascribed coinage of the term to jeanjacques rousseau in war before civilization the myth of the peaceful savage 1996 the archaeologist lawrence h keeley said that the widespread myth that civilized humans have fallen from grace from a simple primeval happiness a peaceful golden age is contradicted and refuted by archeologic evidence that indicates that violence was common practice in early human societies that the noble savage paradigm has warped anthropological literature to political ends moreover the anthropologist roger sandall likewise accused anthropologists of exalting the noble savage above civilized man by way of designer tribalism a form of romanticised primitivism that dehumanises indigenous peoples into the cultural stereotype of the indigene peoples who live a primitive way of life demarcated and limited by tradition which discouraged indigenous peoples from cultural assimilation into the dominant western culture in the prehistory of warfare misled by ethnography 2006 the researchers jonathan haas and matthew piscitelli challenged the idea that the human species is innately bellicose and that warfare is an occasional act'</li><li>'head a small terracotta sculpture of a head with a beard and europeanlike features was found in 1933 in the toluca valley 72 kilometres 45 mi southwest of mexico city in a burial offering under three intact floors of a precolonial building dated to between 1476 and 1510 the artifact has been studied by roman art authority bernard andreae director emeritus of the german institute of archaeology in rome italy and austrian anthropologist robert von heinegeldern both of whom stated that the style of the artifact was compatible with small roman sculptures of the 2nd century if genuine and if not placed there after 1492 the pottery found with it dates to between 1476 and 1510 the find provides evidence for at least a onetime contact between the old and new worldsaccording to arizona state universitys michael e smith a leading mesoamerican scholar named john paddock used to tell his classes in the years before he died that the artifact was planted as a joke by hugo moedano a student who originally worked on the site despite speaking with individuals who knew the original discoverer garcia payon and moedano smith says he has been unable to confirm or reject this claim though he remains skeptical smith concedes he cannot rule out the possibility that the head was a genuinely buried postclassic offering at calixtlahuaca henry i sinclair earl of orkney and feudal baron of roslin 
c 1345 – c 1400 was a scottish nobleman who is best known today from a modern legend which claims that he took part in explorations of greenland and north america almost 100 years before christopher columbuss voyages to the americas in 1784 he was identified by johann reinhold forster as possibly being the prince zichmni who is described in letters which were allegedly written around 1400 by the zeno brothers of venice in which they describe a voyage which they made throughout the north atlantic under the command of zichmni according to the dictionary of canadian biography online the zeno affair remains one of the most preposterous and at the same time one of the most successful fabrications in the history of explorationhenry was the grandfather of william sinclair 1st earl of caithness the builder of rosslyn chapel near edinburgh scotland the authors robert lomas and christopher knight believe some carvings in the chapel were intended to represent ears of new world corn or maize a crop unknown in europe at the time of the chapels construction knight and lomas view these carvings as evidence supporting the idea that henry sinclair traveled to the americas well before columbus in their book they discuss meeting with the wife of the botanist'</li></ul> |
| 21 | <ul><li>'##lenishes nitrogen and other critical nutrients cover crops also help to suppress weeds soilconservation farming involves notill farming green manures and other soilenhancing practices which make it hard for the soils to be equalized such farming methods attempt to mimic the biology of barren lands they can revive damaged soil minimize erosion encourage plant growth eliminate the use of nitrogen fertilizer or fungicide produce aboveaverage yields and protect crops during droughts or flooding the result is less labor and lower costs that increase farmers ’ profits notill farming and cover crops act as sinks for nitrogen and other nutrients this increases the amount of soil organic matterrepeated plowingtilling degrades soil killing its beneficial fungi and earthworms once damaged soil may take multiple seasons to fully recover even in optimal circumstancescritics argue that notill and related methods are impractical and too expensive for many growers partly because it requires new equipment they cite advantages for conventional tilling depending on the geography crops and soil conditions some farmers have contended that notill complicates pest control delays planting and that postharvest residues especially for corn are hard to manage the use of pesticides can contaminate the soil and nearby vegetation and water sources for a long time they affect soil structure and biotic and abiotic composition differentiated taxation schemes are among the options investigated in the academic literature to reducing their use salinity in soil is caused by irrigating with salty water water then evaporates from the soil leaving the salt behind salt breaks down the soil structure causing infertility and reduced growththe ions responsible for salination are sodium na potassium k calcium ca2 magnesium mg2 and chlorine cl− salinity is estimated to affect about one third of the earths arable land soil salinity adversely affects crop metabolism and erosion usually follows salinity occurs on drylands from overirrigation and in areas with shallow saline water tables overirrigation deposits salts in upper soil layers as a byproduct of soil infiltration irrigation merely increases the rate of salt deposition the bestknown case of shallow saline water table capillary action occurred in egypt after the 1970 construction of the aswan dam the change in the groundwater level led to high salt concentrations in the water table the continuous high level of the water table led to soil salination use of humic acids may prevent excess salination especially given excessive irrigation humic acids can fix both anions and cations and eliminate them from root zonesplanting species that can tolerate'</li><li>'in agriculture postharvest handling is the stage of crop production immediately following harvest including cooling cleaning sorting and packing the instant a crop is removed from the ground or separated from its parent plant it begins to deteriorate postharvest treatment largely determines final quality whether a crop is sold for fresh consumption or used as an ingredient in a processed food product the most important goals of postharvest handling are keeping the product cool to avoid moisture loss and slow down undesirable chemical changes and avoiding physical damage such as bruising to delay spoilage sanitation is also an important factor to reduce the possibility of pathogens that could be carried by fresh produce for example as residue from contaminated washing water after the field postharvest processing 
is usually continued in a packing house this can be a simple shed providing shade and running water or a largescale sophisticated mechanised facility with conveyor belts automated sorting and packing stations walkin coolers and the like in mechanised harvesting processing may also begin as part of the actual harvest process with initial cleaning and sorting performed by the harvesting machinery initial postharvest storage conditions are critical to maintaining quality each crop has an optimum range of storage temperature and humidity also certain crops cannot be effectively stored together as unwanted chemical interactions can result various methods of highspeed cooling and sophisticated refrigerated and atmospherecontrolled environments are employed to prolong freshness particularly in largescale operations once harvested vegetables and fruits are subject to the active process of degradation numerous biochemical processes continuously change the original composition of the crop until it becomes unmarketable the period during which consumption is considered acceptable is defined as the time of postharvest shelf lifepostharvest shelf life is typically determined by objective methods that determine the overall appearance taste flavor and texture of the commodity these methods usually include a combination of sensorial biochemical mechanical and colorimetric optical measurements a recent study attempted and failed to discover a biochemical marker and fingerprint methods as indices for freshness postharvest physiology is the scientific study of the plant physiology of living plant tissues after picking it has direct applications to postharvest handling in establishing the storage and transport conditions that best prolong shelf life an example of the importance of the field to postharvest handling is the discovery that ripening of fruit can be delayed and thus their storage prolonged by preventing fruit tissue respiration this insight allowed scientists to bring to bear their knowledge of the fundamental principles and mechanisms of respiration leading to postharvest storage techniques such as cold storage gaseous storage and'</li><li>'cultivated plant taxonomy is the study of the theory and practice of the science that identifies describes classifies and names cultigens — those plants whose origin or selection is primarily due to intentional human activity cultivated plant taxonomists do however work with all kinds of plants in cultivation cultivated plant taxonomy is one part of the study of horticultural botany which is mostly carried out in botanical gardens large nurseries universities or government departments areas of special interest for the cultivated plant taxonomist include searching for and recording new plants suitable for cultivation plant hunting communicating with and advising the general public on matters concerning the classification and nomenclature of cultivated plants and carrying out original research on these topics describing the cultivated plants of particular regions horticultural floras maintaining databases herbaria and other information about cultivated plants much of the work of the cultivated plant taxonomist is concerned with the naming of plants as prescribed by two plant nomenclatural codes the provisions of the international code of nomenclature for algae fungi and plants botanical code serve primarily scientific ends and the objectives of the scientific community while those of the international code of nomenclature for cultivated plants cultivated plant 
code are designed to serve both scientific and utilitarian ends by making provision for the names of plants used in commerce — the cultigens that have arisen in agriculture forestry and horticulture these names sometimes called variety names are not in latin but are added onto the scientific latin names and they assist communication among the community of foresters farmers and horticulturists the history of cultivated plant taxonomy can be traced from the first plant selections that occurred during the agrarian neolithic revolution to the first recorded naming of human plant selections by the romans the naming and classification of cultigens followed a similar path to that of all plants until the establishment of the first cultivated plant code in 1953 which formally established the cultigen classification category of cultivar since that time the classification and naming of cultigens has followed its own path cultivated plant taxonomy has been distinguished from the taxonomy of other plants in at least five ways firstly there is a distinction made according to where the plants are growing — that is whether they are wild or cultivated this is alluded to by the cultivated plant code which specifies in its title that it is dealing with cultivated plants secondly a distinction is made according to how the plants originated this is indicated in principle 2 of the cultivated plant code which defines the scope of the code as plants whose origin or selection is primarily due to the intentional actions of mankind — plants that have evolved under natural selection with human assistance thirdly cultivated plant taxonomy is concerned with plant variation that requires the use of special classification'</li></ul> |
| 32 | <ul><li>'starting point of calculation for simplification it is also common to constrain the first component of the jones vectors to be a real number this discards the overall phase information that would be needed for calculation of interference with other beams note that all jones vectors and matrices in this article employ the convention that the phase of the light wave is given by [UNK] k z − ω t displaystyle phi kzomega t a convention used by hecht under this convention increase in [UNK] x displaystyle phi x or [UNK] y displaystyle phi y indicates retardation delay in phase while decrease indicates advance in phase for example a jones vectors component of i displaystyle i e i π 2 displaystyle eipi 2 indicates retardation by π 2 displaystyle pi 2 or 90 degree compared to 1 e 0 displaystyle e0 collett uses the opposite definition for the phase [UNK] ω t − k z displaystyle phi omega tkz also collet and jones follow different conventions for the definitions of handedness of circular polarization jones convention is called from the point of view of the receiver while colletts convention is called from the point of view of the source the reader should be wary of the choice of convention when consulting references on the jones calculus the following table gives the 6 common examples of normalized jones vectors a general vector that points to any place on the surface is written as a ket ψ ⟩ displaystyle psi rangle when employing the poincare sphere also known as the bloch sphere the basis kets 0 ⟩ displaystyle 0rangle and 1 ⟩ displaystyle 1rangle must be assigned to opposing antipodal pairs of the kets listed above for example one might assign 0 ⟩ displaystyle 0rangle h ⟩ displaystyle hrangle and 1 ⟩ displaystyle 1rangle v ⟩ displaystyle vrangle these assignments are arbitrary opposing pairs are h ⟩ displaystyle hrangle and v ⟩ displaystyle vrangle d ⟩ displaystyle drangle and a ⟩ displaystyle arangle r ⟩ displaystyle rrangle and l ⟩ displaystyle lrangle the polarization of any point not equal to r ⟩ displaystyle rrangle or l ⟩ displaystyle lrangle and not on the circle that passes through h ⟩ d ⟩ v ⟩ a ⟩ displaystyle hrangle drangle vrangle arangle is known as elliptical polarization the jones matrices are operators that act on the jones vectors defined above these matrices are implemented by various optical elements such as lenses beam splitters mirrors etc each matrix represents projection onto a onedimensional'</li><li>'gloss is an optical property which indicates how well a surface reflects light in a specular mirrorlike direction it is one of the important parameters that are used to describe the visual appearance of an object other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables including gloss among the involved aspects the factors that affect gloss are the refractive index of the material the angle of incident light and the surface topography apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions when light illuminates an object it interacts with it in a number of ways absorbed within it largely responsible for colour transmitted through it dependent on the surface transparency and opacity scattered from or within it 
diffuse reflection haze and transmission specularly reflected from it glossvariations in surface texture directly influence the level of specular reflection objects with a smooth surface ie highly polished or containing coatings with finely dispersed pigments appear shiny to the eye due to a large amount of light being reflected in a specular direction whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appears dull the image forming qualities of these surfaces are much lower making any reflections appear blurred and distorted substrate material type also influences the gloss of a surface nonmetallic materials ie plastics etc produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending on the colour of the material metals do not suffer from this effect producing higher amounts of reflection at any angle the fresnel formula gives the specular reflectance r s displaystyle rs for an unpolarized light of intensity i 0 displaystyle i0 at angle of incidence i displaystyle i giving the intensity of specularly reflected beam of intensity i r displaystyle ir while the refractive index of the surface specimen is m displaystyle m the fresnel equation is given as follows r s i r i 0 displaystyle rsfrac iri0 r s 1 2 cos i − m 2 − sin 2 i cos i m 2 − sin 2 i 2 m 2 cos i − m 2 − sin 2 i m 2 cos i m 2 − sin 2 i 2 displaystyle rsfrac 12leftleftfrac cos isqrt m2sin'</li><li>'the black surroundings as compared to that with white surface and surroundings pfund was also the first to suggest that more than one method was needed to analyze gloss correctly in 1937 hunter as part of his research paper on gloss described six different visual criteria attributed to apparent gloss the following diagrams show the relationships between an incident beam of light i a specularly reflected beam s a diffusely reflected beam d and a nearspecularly reflected beam b specular gloss – the perceived brightness and the brilliance of highlights defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface sheen – the perceived shininess at low grazing angles defined as the gloss at grazing angles of incidence and viewing contrast gloss – the perceived brightness of specularly and diffusely reflecting areas defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface absence of bloom – the perceived cloudiness in reflections near the specular direction defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light haze is the inverse of absenceofbloom distinctness of image gloss – identified by the distinctness of images reflected in surfaces defined as the sharpness of the specularly reflected light surface texture gloss – identified by the lack of surface texture and surface blemishesdefined as the uniformity of the surface in terms of visible texture and defects orange peel scratches inclusions etc a surface can therefore appear very shiny if it has a welldefined specular reflectance at the specular angle the perception of an image reflected in the surface can be degraded by appearing unsharp or by appearing to be of low contrast the former is characterised by the measurement of the distinctnessofimage and the latter by the haze or contrast gloss in his paper hunter also noted the importance of three main factors 
in the measurement of gloss the amount of light reflected in the specular direction the amount and way in which the light is spread around the specular direction the change in specular reflection as the specular angle changesfor his research he used a glossmeter with a specular angle of 45° as did most of the first photoelectric methods of that type later studies however by hunter and judd in 1939 on a larger number of painted samples concluded that the 60 degree geometry was the best angle to use so as to provide the closest correlation to a visual observation standardisation in gloss measurement was led by hunter and astm american society for testing and materials who produced astm d523 standard'</li></ul> |
| 19 | <ul><li>'to neurological dysfunction and other health problemsthis condition is inherited in an autosomal recessive pattern which means both copies of the gene have the mutation the parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene but they typically do not show signs and symptoms of the condition diagnosis of this disorder depends on blood tests demonstrating the absence of serum ceruloplasmin combined with low serum copper concentration low serum iron concentration high serum ferritin concentration or increased hepatic iron concentration mri scans can also confirm a diagnosis abnormal low intensities can indicate iron accumulation in the brain children of affected individuals are obligate carriers for aceruloplasminemia if the cp mutations has been identified in a related individual prenatal testing is recommended siblings of those affected by the disease are at a 25 of aceruloplasminemia in asymptomatic siblings serum concentrations of hemoglobin and hemoglobin a1c should be monitoredto prevent the progression of symptoms of the disease annual glucose tolerance tests beginning in early teen years to evaluate the onset of diabetes mellitus those at risk should avoid taking iron supplements treatment includes the use of iron chelating agents such as desferrioxamine to lower brain and liver iron stores and to prevent progression of neurologic symptoms this combined with freshfrozen human plasma ffp works effectively in decreasing liver iron content repetitive use of ffp can even improve neurologic symptoms antioxidants such as vitamin e can be used simultaneously to prevent tissue damage to the liver and pancreas human iron metabolism iron overload disorder'</li><li>'a bile duct is any of a number of long tubelike structures that carry bile and is present in most vertebrates bile is required for the digestion of food and is secreted by the liver into passages that carry bile toward the hepatic duct it joins the cystic duct carrying bile to and from the gallbladder to form the common bile duct which then opens into the intestine the top half of the common bile duct is associated with the liver while the bottom half of the common bile duct is associated with the pancreas through which it passes on its way to the intestine it opens into the part of the intestine called the duodenum via the ampulla of vater the biliary tree see below is the whole network of various sized ducts branching through the liver the path is as follows bile canaliculi → canals of hering → interlobular bile ducts → intrahepatic bile ducts → left and right hepatic ducts merge to form → common hepatic duct exits liver and joins → cystic duct from gall bladder forming → common bile duct → joins with pancreatic duct → forming ampulla of vater → enters duodenum inflation of a balloon in the bile duct causes through the vagus nerve activation of the brain stem and the insular cortex prefrontal cortex and somatosensory cortex blockage or obstruction of the bile duct by gallstones scarring from injury or cancer prevents the bile from being transported to the intestine and the active ingredient in the bile bilirubin instead accumulates in the blood this condition results in jaundice where the skin and eyes become yellow from the bilirubin in the blood this condition also causes severe itchiness from the bilirubin deposited in the tissues in certain types of jaundice the urine will be noticeably darker and the stools will be much paler than usual this is caused by the 
bilirubin all going to the bloodstream and being filtered into the urine by the kidneys instead of some being lost in the stools through the ampulla of vater jaundice jaundice is commonly caused by conditions such as pancreatic cancer which causes blockage of the bile duct passing through the cancerous portion of the pancreas cholangiocarcinoma cancer of the bile ducts blockage by a stone in patients with gallstones and from scarring after injury to the bile duct during gallbladder removal drainage biliary drainage is performed with a'</li><li>'##ing of skin and higher than normal gamma glutamyl transferase and alkaline phosphatase laboratory values they are in most cases located in the right hepatic lobe and are frequently seen as a single lesion their size ranges from 1 to 30 cm they can be difficult to diagnosis with imaging studies alone because it can be hard to tell the difference between hepatocellular adenoma focal nodular hyperplasia and hepatocellular carcinoma molecular categorization via biopsy and pathological analysis aids in both diagnosis and understanding prognosis particularly because hepatocellular adenomas have the potential to become malignant it is important to note percutaneous biopsy should be avoided because this method can lead to bleeding or rupture of the adenoma the best way to biopsy suspected hepatic adenoma is via open or laparoscopic excisional biopsybecause hepatocellular adenomas are so rare there are no clear guidelines for the best course of treatment the complications which include malignant transformation spontaneous hemorrhage and rupture are considered when determining the treatment approach estimates indicate approximately 2040 of hepatocellular adenomas will undergo spontaneous hemorrhage the evidence is not well elucidated but the best available data suggests that the risk of hepatocellular adenoma becoming hepatocellular carcinoma which is malignant liver tumor is 42 of all cases transformation to hepatocellular carcinoma is more common in men currently if the hepatic adenoma is 5 cm increasing in size symptomatic lesions has molecular markers associated with hcc transformation rising level of liver tumor markers such as alpha fetoprotein the patient is a male or has a glycogen storage disorder the adenoma is recommended to be surgically removed like most liver tumors the anatomy and location of the adenoma determines whether the tumor can removed laparoscopically or if it requires an open surgical procedure hepatocellular adenomas are also known to decrease in size when there is decreased estrogen or steroids eg when estrogencontaining contraceptives steroids are stopped or postpartumwomen of childbearing age with hepatic adenomas were previously recommended to avoid becoming pregnant altogether however currently a more individualized approach is recommended that takes into account the size of the adenoma and whether surgical resection is possible prior to becoming pregnant currently there is a clinical trial called the pregnancy and liver adenoma management palm study that'</li></ul> |
| 36 | <ul><li>'actions they refer to for example buzz hullabaloo bling opening statement — first part of discourse should gain audiences attention orator — a public speaker especially one who is eloquent or skilled oxymoron — opposed or markedly contradictory terms joined for emphasis panegyric — a formal public speech delivered in high praise of a person or thing paradeigma — argument created by a list of examples that leads to a probable generalized idea paradiastole — redescription usually in a better light paradox — an apparently absurd or selfcontradictory statement or proposition paralipsis — a form of apophasis when a rhetor introduces a subject by denying it should be discussed to speak of someone or something by claiming not to parallelism — the correspondence in sense or construction of successive clauses or passages parallel syntax — repetition of similar sentence structures paraprosdokian — a sentence in which the latter half takes an unexpected turn parataxis — using juxtaposition of short simple sentences to connect ideas as opposed to explicit conjunction parenthesis — an explanatory or qualifying word clause or sentence inserted into a passage that is not essential to the literal meaning parody — comic imitation of something or somebody paronomasia — a pun a play on words often for humorous effect pathos — the emotional appeal to an audience in an argument one of aristotles three proofs periphrasis — the substitution of many or several words where one would suffice usually to avoid using that particular word personification — a figure of speech that gives human characteristics to inanimate objects or represents an absent person as being present for example but if this invincible city should now give utterance to her voice would she not speak as follows rhetorica ad herennium petitio — in a letter an announcement demand or request philippic — a fiery damning speech delivered to condemn a particular political actor the term is derived from demostheness speeches in 351 bc denouncing the imperialist ambitions of philip of macedon which later came to be known as the philippics phronesis — practical wisdom common sense pistis — the elements to induce true judgment through enthymemes hence to give proof of a statement pleonasm — the use of more words than necessary to express an idea polyptoton — the repetition of a word or root in different cases or inflections within the same sentence polysemy — the capacity of a word or phrase to render more than one meaning polysyndeton — the repeated use of conjunctions within'</li><li>'a workable body of law thus canadas legal system may have more potential for conflicts with regards to the accusation of judicial activism as compared to the united statesformer chief justice of the supreme court of canada beverley mclachlin has stated that the charge of judicial activism may be understood as saying that judges are pursuing a particular political agenda that they are allowing their political views to determine the outcome of cases before them it is a serious matter to suggest that any branch of government is deliberately acting in a manner that is inconsistent with its constitutional role1such accusations often arise in response to rulings involving the canadian charter of rights and freedoms specifically rulings that have favoured the extension of gay rights have prompted accusations of judicial activism justice rosalie abella is a particularly common target of those who perceive activism on the supreme court of canada benchthe judgment 
chaoulli v quebec 2005 1 rcs which declared unconstitutional the prohibition of private healthcare insurance and challenged the principle of canadian universal health care in quebec was deemed by many as a prominent example of judicial activism the judgment was written by justice deschamps with a tight majority of 4 against 3 in the cassis de dijon case the european court of justice ruled the german laws prohibiting sales of liquors with alcohol percentages between 15 and 25 conflicted with eu laws this ruling confirmed that eu law has primacy over memberstate law when the treaties are unclear they leave room for the court to interpret them in different ways when eu treaties are negotiated it is difficult to get all governments to agree on a clear set of laws in order to get a compromise governments agree to leave a decision on an issue to the courtthe court can only practice judicial activism to the extent the eu governments leave room for interpretation in the treatiesthe court makes important rulings that set the agenda for further eu integration but it cannot happen without the consensual support of the memberstatesin the irish referendum on the lisbon treaty many issues not directly related to the treaty such as abortion were included in the debate because of worries that the lisbon treaty will enable the european court of justice to make activist rulings in these areas after the rejection of the lisbon treaty in ireland the irish government received concessions from the rest of the member states of the european union to make written guarantees that the eu will under no circumstances interfere with irish abortion taxation or military neutrality ireland voted on the lisbon treaty a second time in 2009 with a 6713 majority voting yes to the treaty india has a recent history of judicial activism originating after the emergency in india which saw attempts by the government to control the judiciary public interest'</li><li>'within the field of rhetoric the contributions of female rhetoricians have often been overlooked anthologies comprising the history of rhetoric or rhetoricians often leave the impression there were none throughout history however there have been a significant number of women rhetoricians [UNK] — the act of looking back of seeing with fresh eyes of entering an old text from a new critical direction — is for women more than a chapter in cultural history it is an act of survival adrienne rich the following is a timeline of contributions made to the field of rhetoric by women aspasia c 410 bc was a milesian woman who was known and highly regarded for her teaching of political theory and rhetoric she is mentioned in platos memexenus and is often credited with teaching the socratic method to socrates diotima of mantinea 4th century bc is an important character in platos symposium it is uncertain if she was a real person or perhaps a character modelled after aspasia for whom plato had much respect julian of norwich 1343 – 1415 english mystic who challenged the teachings of medieval christianity in regard to womens inferior role in religionrevelations of divine lovecatherine of siena 1347 – 1380 italian who was influential through her writings to men and women in authority where she begged for peace in italy and for the return of the papacy to rome she was canonized in 1461 by pope pius iiletter 83 to mona lapa her mother in siena 1376christine de pizan 1365 – 1430 venetian who moved to france at an early age she was influential as a writer rhetorician and critic during the 
medieval period and was europes first female professional authorthe book of the city of ladies 1404margery kempe 1373 – 1439 british woman who could neither read nor write but dictated her life story the book of margery kempe after receiving a vision of christ during the birth of the first of her fourteen children from the 15th century kempe was viewed as a holy woman after her book was published in pamphlet form with any thought or behavior that could be viewed as nonconforming or unorthodox removed when the original was rediscovered in 1934 a more complex selfportrait emergedthe book of margery kempe 1436 laura cereta 1469 – 1499 italian humanist and feminist who was influential in the letters she wrote to other intellectuals through her letters she fought for womens right to education and against the oppression of married womenletter to bibulus sempronius defense of the liberal instruction of women 1488 margaret fell 1614'</li></ul> |
| 42 | <ul><li>'virus siv a virus similar to hiv is capable of infecting primates the epstein – barr virus ebv is one of eight known herpesviruses it displays host tropism for human b cells through the cd21gp350220 complex and is thought to be the cause of infectious mononucleosis burkitts lymphoma hodgkins disease nasopharyngeal carcinoma and lymphomas ebv enters the body through oral transfer of saliva and it is thought to infect more than 90 of the worlds adult population ebv may also infect epithelial cells t cells and natural killer cells through mechanisms different than the cd21 receptormediated process in b cells the zika virus is a mosquitoborne arbovirus in the genus flavivirus that exhibits tropism for the human maternal decidua the fetal placenta and the umbilical cord on the cellular level the zika virus targets decidual macrophages decidual fibroblasts trophoblasts hofbauer cells and mesenchymal stem cells due to their increased capacity to support virion replication in adults infection by the zika virus may lead to zika fever and if the infection occurs during the first trimester of pregnancy neurological complications such as microcephaly may occur mycobacterium tuberculosis is a humantropic bacterium that causes tuberculosis the second most common cause of death due to an infectious agent the cell envelope glycoconjugates surrounding m tuberculosis allow the bacteria to infect human lung tissue while providing an intrinsic resistance to pharmaceuticals m tuberculosis enters the lung alveoler passages through aerosol droplets and it then becomes phagocytosed by macrophages however since the macrophages are unable to completely kill m tuberculosis granulomas are formed within the lungs providing an ideal environment for continued bacterial colonization more than an estimated 30 of the world population is colonized by staphylococcus aureus a microorganism capable of causing skin infections nosocomial infections and food poisoning due to its tropism for human skin and soft tissue the s aureus clonal complex cc121 is known to exhibit multihost tropism for both humans and rabbits this is thought to be due to a single nucleotide mutation that evolved the cc121 complex into st121 clonal complex the clone capable of infecting rabbits enteropathogenic and enterohaemorrhagic escherichia'</li><li>'all oncoviruses are dna viruses some rna viruses have also been associated such as the hepatitis c virus as well as certain retroviruses eg human tlymphotropic virus htlv1 and rous sarcoma virus rsv estimated percent of new cancers attributable to the virus worldwide in 2002 na indicates not available the association of other viruses with human cancer is continually under research the main viruses associated with human cancers are the human papillomavirus the hepatitis b and hepatitis c viruses the epstein – barr virus the human tlymphotropic virus the kaposis sarcomaassociated herpesvirus kshv and the merkel cell polyomavirus experimental and epidemiological data imply a causative role for viruses and they appear to be the second most important risk factor for cancer development in humans exceeded only by tobacco usage the mode of virally induced tumors can be divided into two acutely transforming or slowly transforming in acutely transforming viruses the viral particles carry a gene that encodes for an overactive oncogene called viraloncogene vonc and the infected cell is transformed as soon as vonc is expressed in contrast in slowly transforming viruses the virus genome is inserted 
especially as viral genome insertion is an obligatory part of retroviruses near a protooncogene in the host genome the viral promoter or other transcription regulation elements in turn cause overexpression of that protooncogene which in turn induces uncontrolled cellular proliferation because viral genome insertion is not specific to protooncogenes and the chance of insertion near that protooncogene is low slowly transforming viruses have very long tumor latency compared to acutely transforming viruses which already carry the viral oncogenehepatitis viruses including hepatitis b and hepatitis c can induce a chronic viral infection that leads to liver cancer in 047 of hepatitis b patients per year especially in asia less so in north america and in 14 of hepatitis c carriers per year liver cirrhosis whether from chronic viral hepatitis infection or alcoholism is associated with the development of liver cancer and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development worldwide liver cancer is one of the most common and most deadly cancers due to a huge burden of viral hepatitis transmission and diseasethrough advances in cancer research vaccines designed to prevent cancer have been created the hepatitis b vaccine is the first vaccine that has been established to prevent cancer hepatocellular carcinoma by preventing infection with the causative'</li><li>'gisaid the global initiative on sharing all influenza data previously the global initiative on sharing avian influenza data is a global science initiative established in 2008 to provide access to genomic data of influenza viruses the database was expanded to include the coronavirus responsible for the covid19 pandemic as well as other pathogens the database has been described as the worlds largest repository of covid19 sequences gisaid facilitates genomic epidemiology and realtime surveillance to monitor the emergence of new covid19 viral strains across the planetsince its establishment as an alternative to sharing avian influenza data via conventional publicdomain archives gisaid has facilitated the exchange of outbreak genome data during the h1n1 pandemic in 2009 the h7n9 epidemic in 2013 the covid19 pandemic and the 2022 – 2023 mpox outbreak since 1952 influenza strains had been collected by national influenza centers nics and distributed through the whos global influenza surveillance and response system gisrs countries provided samples to the who but the data was then shared with them for free with pharmaceutical companies who could patent vaccines produced from the samples beginning in january 2006 italian researcher ilaria capua refused to upload her data to a closed database and called for genomic data on h5n1 avian influenza to be in the public domain at a conference of the oiefao network of expertise on animal influenza capua persuaded participants to agree to each sequence and release data on 20 strains of influenza some scientists had concerns about sharing their data in case others published scientific papers using the data before them but capua dismissed this telling science what is more important another paper for ilaria capuas team or addressing a major health threat lets get our priorities straight peter bogner a german in his 40s based in the usa and who previously had no experience in public health read an article about capuas call and helped to found and fund gisaid bogner met nancy cox who was then leading the us centers for disease controls influenza division at a conference 
and cox went on to chair gisaids scientific advisory councilthe acronym gisaid was coined in a correspondence letter published in the journal nature in august 2006 putting forward an initial aspiration of creating a consortium for a new global initiative on sharing avian influenza data later all would replace avian whereby its members would release data in publicly available databases up to six months after analysis and validation initially the organisation collaborated with the australian nonprofit organization cambia and the creative commons project science commons although no essential ground rules for sharing were established the'</li></ul> |
| 2 | <ul><li>'the complex roots to any precision uspenskys algorithm of collins and akritas improved by rouillier and zimmermann and based on descartes rule of signs this algorithms computes the real roots isolated in intervals of arbitrary small width it is implemented in maple functions fsolve and rootfindingisolate there are at least four software packages which can solve zerodimensional systems automatically by automatically one means that no human intervention is needed between input and output and thus that no knowledge of the method by the user is needed there are also several other software packages which may be useful for solving zerodimensional systems some of them are listed after the automatic solvers the maple function rootfindingisolate takes as input any polynomial system over the rational numbers if some coefficients are floating point numbers they are converted to rational numbers and outputs the real solutions represented either optionally as intervals of rational numbers or as floating point approximations of arbitrary precision if the system is not zero dimensional this is signaled as an error internally this solver designed by f rouillier computes first a grobner basis and then a rational univariate representation from which the required approximation of the solutions are deduced it works routinely for systems having up to a few hundred complex solutions the rational univariate representation may be computed with maple function groebnerrationalunivariaterepresentation to extract all the complex solutions from a rational univariate representation one may use mpsolve which computes the complex roots of univariate polynomials to any precision it is recommended to run mpsolve several times doubling the precision each time until solutions remain stable as the substitution of the roots in the equations of the input variables can be highly unstable the second solver is phcpack written under the direction of j verschelde phcpack implements the homotopy continuation method this solver computes the isolated complex solutions of polynomial systems having as many equations as variables the third solver is bertini written by d j bates j d hauenstein a j sommese and c w wampler bertini uses numerical homotopy continuation with adaptive precision in addition to computing zerodimensional solution sets both phcpack and bertini are capable of working with positive dimensional solution sets the fourth solver is the maple library regularchains written by marc morenomaza and collaborators it contains various functions for solving polynomial systems by means of regular chains elimination theory systems of polynomial inequalities triangular decomposition wus method of characteristic set'</li><li>'##duality is the irrelevance of de morgans laws those laws are built into the syntax of the primary algebra from the outset the true nature of the distinction between the primary algebra on the one hand and 2 and sentential logic on the other now emerges in the latter formalisms complementationnegation operating on nothing is not wellformed but an empty cross is a wellformed primary algebra expression denoting the marked state a primitive value hence a nonempty cross is an operator while an empty cross is an operand because it denotes a primitive value thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action the making of a distinction syllogisms appendix 2 of lof shows how to 
translate traditional syllogisms and sorites into the primary algebra a valid syllogism is simply one whose primary algebra translation simplifies to an empty cross let $a$ denote a literal ie either $a$ or $\overline{a}$ indifferently then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of barbara whose primary algebra equivalent is $\overline{a^{*}b}\;\overline{\overline{b}\,c^{*}}\;a^{*}c^{*}$ these 24 possible permutations include the 19 syllogistic forms deemed valid in aristotelian and medieval logic this primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic and that the primary algebra has affinities to the boolean term schemata of quine 1982 part ii the following calculation of leibnizs nontrivial praeclarum theorema exemplifies the demonstrative power of the primary algebra let c1 be $\overline{\overline{a}}=a$ c2 be $a\,\overline{ab}=a\,\overline{b}$ c3 be $\overline{\ }\,a=\overline{\ }$ j1a be $\overline{\overline{a}\,a}=\overline{\ }$ and let oi mean that variables and subformulae have been reordered in a way that commutativity and associativity permit the primary algebra embodies a point noted by huntington in 1933 boolean algebra requires in addition to one unary operation one and not two binary operations hence the seldomnoted fact that boolean algebra'</li><li>'##n and company 1925 pp 477ff reprinted 1958 by dover publications'</li></ul> |
| 39 | <ul><li>'boundaries at the flow extremes for a particular speed which are caused by different phenomena the steepness of the high flow part of a constant speed line is due to the effects of compressibility the position of the other end of the line is located by blade or passage flow separation there is a welldefined lowflow boundary marked on the map as a stall or surge line at which blade stall occurs due to positive incidence separation not marked as such on maps for turbochargers and gas turbine engines is a more gradually approached highflow boundary at which passages choke when the gas velocity reaches the speed of sound this boundary is identified for industrial compressors as overload choke sonic or stonewall the approach to this flow limit is indicated by the speed lines becoming more vertical other areas of the map are regions where fluctuating vane stalling may interact with blade structural modes leading to failure ie rotating stall causing metal fatigue different applications move over their particular map along different paths an example map with no operating lines is shown as a pictorial reference with the stallsurge line on the left and the steepening speed lines towards choke and overload on the right maps have similar features and general shape because they all apply to machines with spinning vanes which use similar principles for pumping a compressible fluid not all machines have stationary vanes centrifugal compressors may have either vaned or vaneless diffusers however a compressor operating as part of a gas turbine or turbocharged engine behaves differently to an industrial compressor because its flow and pressure characteristics have to match those of its driving turbine and other engine components such as power turbine or jet nozzle for a gas turbine and for a turbocharger the engine airflow which depends on engine speed and charge pressure a link between a gas turbine compressor and its engine can be shown with lines of constant engine temperature ratio ie the effect of fuelling increased turbine temperature which raises the running line as the temperature ratio increases one manifestation of different behaviour appears in the choke region on the righthand side of a map it is a noload condition in a gas turbine turbocharger or industrial axial compressor but overload in an industrial centrifugal compressor hiereth et al show a turbocharger compressor fullload or maximum fuelling curve runs up close to the surge line a gas turbine compressor fullload line also runs close to the surge line the industrial compressor overload is a capacity limit and requires high power levels to pass the high flow rates required excess power is available to inadvertently take the compressor beyond the overload limit to a hazardous condition'</li><li>'a thermodynamic instrument is any device for the measurement of thermodynamic systems in order for a thermodynamic parameter or physical quantity to be truly defined a technique for its measurement must be specified for example the ultimate definition of temperature is what a thermometer reads the question follows – what is a thermometer there are two types of thermodynamic instruments the meter and the reservoir a thermodynamic meter is any device which measures any parameter of a thermodynamic system a thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system two general complementary tools are the meter and the reservoir it is
important that these two types of instruments are distinct a meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure if for example a thermometer were to act as a temperature reservoir it would alter the temperature of the system being measured and the reading would be incorrect ideal meters have no effect on the state variables of the system they are measuring a meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer the nature of its contact with the system it is measuring can be controlled and it is sufficiently small that it does not appreciably affect the state of the system being measured the theoretical thermometer described below is just such a meter in some cases the thermodynamic parameter is actually defined in terms of an idealized measuring instrument for example the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body they are also in thermal equilibrium with each other this principle as noted by james maxwell in 1872 asserts that it is possible to measure temperature an idealized thermometer is a sample of an ideal gas at constant pressure from the ideal gas law the volume of such a sample can be used as an indicator of temperature in this manner it defines temperature although pressure is defined mechanically a pressuremeasuring device called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature a calorimeter is a device which is used to measure and define the internal energy of a system some common thermodynamic meters are thermometer a device which measures temperature as described above barometer a device which measures pressure an ideal gas barometer may be constructed by mechanically connecting an ideal gas to the system being'</li><li>'a transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states in particular for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour andor supercritical conditions during the expansion phase the ultrasupercritical steam rankine cycle represents a widespread transcritical cycle in the electricity generation field from fossil fuels where water is used as working fluid other typical applications of transcritical cycles to the purpose of power generation are represented by organic rankine cycles which are especially suitable to exploit low temperature heat sources such as geothermal energy heat recovery applications or waste to energy plants with respect to subcritical cycles the transcritical cycle exploits by definition higher pressure ratios a feature that ultimately yields higher efficiencies for the majority of the working fluids considering then also supercritical cycles as a valid alternative to the transcritical ones the latter cycles are capable of achieving higher specific works due to the limited relative importance of the work of compression work this evidences the extreme potential of transcritical cycles to the purpose of producing the most power measurable in terms of the cycle specific work with the least expenditure measurable in terms of spent energy to compress the working fluid while in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid in transcritical cycles one pressure level is above the critical pressure and the other is below in the 
refrigeration field carbon dioxide co2 is increasingly considered of interest as refrigerant in transcritical cycles the pressure of the working fluid at the outlet of the pump is higher than the critical pressure while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature during the heating phase which is typically considered an isobaric process the working fluid overcomes the critical temperature moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process a significant difference between subcritical and transcritical cycles due to this significant difference in the heating phase the heat injection into the cycle is significantly more efficient from a second law perspective since the average temperature difference between the hot source and the working fluid is reduced as a consequence the maximum temperatures reached by the cold source can be higher at fixed hot source characteristics therefore the expansion process can be accomplished exploiting higher pressure ratios which yields higher power production modern ultrasupercritical rankine cycles can reach maximum temperatures up to 620°c exploiting the optimized heat introduction process as in'</li></ul> |
| 27 | <ul><li>'area of research that is being looked into with regards to loc is with home security automated monitoring of volatile organic compounds vocs is a desired functionality for loc if this application becomes reliable these microdevices could be installed on a global scale and notify homeowners of potentially dangerous compounds labonachip devices could be used to characterize pollen tube guidance in arabidopsis thaliana specifically plant on a chip is a miniaturized device in which pollen tissues and ovules could be incubated for plant sciences studies biochemical assays dielectrophoresis detection of cancer cells and bacteria immunoassay detect bacteria viruses and cancers based on antigenantibody reactions ion channel screening patch clamp microfluidics microphysiometry organonachip realtime pcr detection of bacteria viruses and cancers testing the safety and efficacy of new drugs as with lung on a chip total analysis system books geschke klank telleman eds microsystem engineering of labonachip devices 1st ed john wiley sons isbn 3527307338 herold ke rasooly a eds 2009 labonachip technology fabrication and microfluidics caister academic press isbn 9781904455462 herold ke rasooly a eds 2009 labonachip technology biomolecular separation and analysis caister academic press isbn 9781904455479 yehya h ghallab wael badawy 2010 labonachip techniques circuits and biomedical applications artech house p 220 isbn 9781596934184 2012 gareth jenkins colin d mansfield eds methods in molecular biology – microfluidic diagnostics humana press isbn 9781627031332'</li><li>'mentioned before this poses extremely negative environmental implications while also demonstrating the high waste associated with conventional fertilizers on the other hand nanofertilizers are able to amend this issue because of their high absorption efficiency into the targeted plant which is owed to their remarkably high surface area to volume ratios in a study done on the use of phosphorus nanofertilizers absorption efficiencies of up to 906 were achieved making them a highly desirable fertilizer material another beneficial aspect of using nanofertilizers is the ability to provide slow release of nutrients into the plant over a 4050 day time period rather than the 410 day period of conventional fertilizers this again proves to be beneficial economically requiring less resources to be devoted to fertilizer transport and less amount of total fertilizer needed as expected with greater ability for nutrient uptake crops have been found to exhibit greater health when using nanofertilizers over conventional ones one study analyzed the effect of a potatospecific nano fertilizer composed of a variety of elements including k p n and mg in comparison to a control group using their conventional counterparts the study found that the potato crop which used the nanofertilizer had an increased crop yield in comparison to the control as well as more efficient water use and agronomic efficiency defined as units of yield increased per unit of nutrient applied in addition the study found that the nano fertilized potatoes had a higher nutrient content such as increased starch and ascorbic acid content another study analyzed the use of ironbased nanofertilizers in black eyed peas and determined that root stability increased dramatically in the use of nano fertilizer as well as chlorophyll content in leaves thus improving photosynthesis a different study found that zinc nanofertilizers enhanced photosynthesis rate in maize crops measured through
soluble carbohydrate concentration likely as a result of the role of zinc in the photosynthesis process much work needs to be done in the future to make nanofertilizers a consistent viable alternative to conventional fertilizers effective legislation needs to be drafted regulating the use of nanofertilizers drafting standards for consistent quality and targeted release of nutrients further more studies need to be done to understand the full benefits and potential downsides of nanofertilizers to gain the full picture in approach of using nanotechnology to benefit agriculture in an everchanging world nanotechnology has played a pivotal role in the field of genetic engineering and plant transformations making it a desirable candidate in the optimization'</li><li>'##s graphene metals oxides soft materials up to microns nanocellulose polyelectrolyte including nanoparticles applications including thin film solar cells barrier coatings including antireflective coatings antimicrobial surfaces selfcleaning glass plasmonic metamaterials electroswitching surfaces layerbylayer assembly and graphene'</li></ul> |
| 24 | <ul><li>'in the wall street journals review of the best architecture of 2018 with julie v iovine writing that glenstones architecture takes an approach that offers a sequence of events revealed gradually with constantly shifting perspectives as opposed to classic modernisms tightly controlled image of architecture as geometric tableau in 2020 the expansion was a winner of the american institute of architects architecture awards in 2019 glenstone opened a 7200squarefoot 670 m2 environmental center on its campus the building contains selfguided exhibits about recycling composting and reforestation the pavilions is built around the water court an 18000squarefoot 1700 m2 water garden containing thousands of aquatic plants such as waterlilies irises thalias cattails and rushes the water courts design was inspired by the reflecting pool at the brion cemetery in northern italy referring to the way the museum returns visitors to the water court samuel medina wrote for metropolis art isnt the heart of the glenstone museum which opened in october water is pulitzer prizewinning critic sebastian smee wrote of the water court its as if youve entered a beautiful sanctuary possibly in another hemisphere maybe another era although youve descended you actually feel a kind of lift a buoyancy such as what birds must feel when they catch warm air currents you exhale you feel liberated from everyday cares youre ready for the art the expansion also added 130 acres 53 ha of land to the campus a landscape largely composed of woodland and wildflower meadows the landscaping was designed by landscape architect peter walkers firm pwp landscape architecture the effort included the planting of about 8000 trees the transplanting of 200 trees the conversion of lawn areas to meadows and the restoration of streams that flowed through the campus glenstones landscaping is managed using organic products only this outdoor space hosts large art installations by artists including jeff koons felix gonzaleztorres michael heizer and richard serra in a review for the washington post in 2018 philip kennicott wrote that glenstone is a mustsee museum and that its creators successfully integrate art architecture and landscape referring to the natural setting of the museum he wrote that everything is quietly spectacular with curated views to the outdoors that present nature as visual haiku kennicott tempered his review by mentioning that the museums distinctive architecture and layout continually confront visitors with strange visions that will make it interesting to see how it is received kriston capps of washington city paper called glenstones 2018 expansion successful and enchanting with a sublime viewing experience he wrote that the museums collection excels in its focus on conventional paintings sculptures and installations but excludes more modern media such as video or performance art concerning this conservative focus cap'</li><li>'the slope geotextiles have been used to protect the fossil hominid footprints of laetoli in tanzania from erosion rain and tree roots in building demolition geotextile fabrics in combination with steel wire fencing can contain explosive debris coir coconut fiber geotextiles are popular for erosion control slope stabilization and bioengineering due to the fabrics substantial mechanical strength app ie coir geotextiles last approximately 3 to 5 years depending on the fabric weight the product degrades into humus enriching the soil glacial retreat geotextiles with reflective properties are often used in
protecting the melting glaciers in north italy they use geotextiles to cover the glaciers for protecting from the sun the reflective properties of the geotextile reflect the sun away from the melting glacier in order to slow the process however this process has proven to be more expensive than effective while many possible design methods or combinations of methods are available to the geotextile designer the ultimate decision for a particular application usually takes one of three directions design by cost and availability design by specification or design by function extensive literature on design methods for geotextiles has been published in the peer reviewed journal geotextiles and geomembranes geotextiles are needed for specific requirements just as anything else in the world some of these requirements consist of polymers composed of a minimum of 85 by weight polypropylene polyesters polyamides polyolefins and polyethylene geomembrane hard landscape materials polypropylene raffia sediment control john n w m 1987 geotextiles glasgow blackie publishing ltd koerner r m 2012 designing with geosynthetics 6th edition xlibris publishing co koerner r m ed 2016 geotextiles from design to applications amsterdam woodhead publishing co'</li><li>'society or the california native plant society which are made up of gardeners interested in growing plants local to their area state or country in the united states wild ones — native plants natural landscapes is a national organization with local chapters in many states new england wildflower society and lady bird johnson wildflower center provide information on native plants and promote natural landscaping these organizations can be the best resources for learning about and obtaining local native plants many members have spent years or decades cultivating local plants or bushwalking in local areas permaculture organic lawn management piet oudolf terroir wildlife gardening xeriscaping north american native plant society christopher thomas ed 2011 the new american landscape leading voices on the future of sustainable gardening timber press isbn 9781604691863 diekelmann john robert m schuster 2002 natural landscaping designing with native plant communities university of wisconsin press isbn 9780299173241 stein sara 1993 noahs garden restoring the ecology of our own back yards houghtonmifflin isbn 0395653738 stein sara 1997 planting noahs garden further adventures in backyard ecology houghtonmifflin isbn 9780395709603 tallamy douglas w 2007 bringing nature home how native plants sustain wildlife in our gardens timber press isbn 9780881928549 tallamy douglas w 2020 natures best hope a new approach to conservation that starts in your yard timber press isbn 9781604699005 wasowski andy and sally 2000 the landscaping revolution garden with mother nature not against her contemporary books isbn 9780809226658 wasowski sally 2001 gardening with prairie plants how to create beautiful native landscapes university of minnesota press isbn 0816630879'</li></ul> |
| 9 | <ul><li>'a circular chromosome is a chromosome in bacteria archaea mitochondria and chloroplasts in the form of a molecule of circular dna unlike the linear chromosome of most eukaryotes most prokaryote chromosomes contain a circular dna molecule – there are no free ends to the dna free ends would otherwise create significant challenges to cells with respect to dna replication and stability cells that do contain chromosomes with dna ends or telomeres most eukaryotes have acquired elaborate mechanisms to overcome these challenges however a circular chromosome can provide other challenges for cells after replication the two progeny circular chromosomes can sometimes remain interlinked or tangled and they must be resolved so that each cell inherits one complete copy of the chromosome during cell division the circular bacteria chromosome replication is best understood in the wellstudied bacteria escherichia coli and bacillus subtilis chromosome replication proceeds in three major stages initiation elongation and termination the initiation stage starts with the ordered assembly of initiator proteins at the origin region of the chromosome called oric these assembly stages are regulated to ensure that chromosome replication occurs only once in each cell cycle during the elongation phase of replication the enzymes that were assembled at oric during initiation proceed along each arm replichore of the chromosome in opposite directions away from the oric replicating the dna to create two identical copies this process is known as bidirectional replication the entire assembly of molecules involved in dna replication on each arm is called a replisome at the forefront of the replisome is a dna helicase that unwinds the two strands of dna creating a moving replication fork the two unwound single strands of dna serve as templates for dna polymerase which moves with the helicase together with other proteins to synthesise a complementary copy of each strand in this way two identical copies of the original dna are created eventually the two replication forks moving around the circular chromosome meet in a specific zone of the chromosome approximately opposite oric called the terminus region the elongation enzymes then disassemble and the two daughter chromosomes are resolved before cell division is completed the e coli origin of replication called oric consists of dna sequences that are recognised by the dnaa protein which is highly conserved amongst different bacterial species dnaa binding to the origin initiates the regulated recruitment of other enzymes and proteins that will eventually lead to the establishment of two complete replisomes for bidirectional replicationdna sequence elements within oric that are important for its function include dnaa boxes a 9mer repeat with a highly'</li><li>'methods are carried out on the distance matrices an important point is that the scale of data is extensive and further approaches must be taken to identify patterns from the available information tools used to analyze the data include vamps qiime mothur and dada2 or unoise3 for denoising metagenomics is also used extensively for studying microbial communities in metagenomic sequencing dna is recovered directly from environmental samples in an untargeted manner with the goal of obtaining an unbiased sample from all genes of all members of the community recent studies use shotgun sanger sequencing or pyrosequencing to recover the sequences of the reads the reads can then be assembled into contigs to determine the 
phylogenetic identity of a sequence it is compared to available full genome sequences using methods such as blast one drawback of this approach is that many members of microbial communities do not have a representative sequenced genome but this applies to 16s rrna amplicon sequencing as well and is a fundamental problem with shotgun sequencing it can be resolved by having a high coverage 50100x of the unknown genome effectively doing a de novo genome assembly as soon as there is a complete genome of an unknown organism available it can be compared phylogenetically and the organism put into its place in the tree of life by creating new taxa an emerging approach is to combine shotgun sequencing with proximityligation data hic to assemble complete microbial genomes without culturing despite the fact that metagenomics is limited by the availability of reference sequences one significant advantage of metagenomics over targeted amplicon sequencing is that metagenomics data can elucidate the functional potential of the community dna targeted gene surveys cannot do this as they only reveal the phylogenetic relationship between the same gene from different organisms functional analysis is done by comparing the recovered sequences to databases of metagenomic annotations such as kegg the metabolic pathways that these genes are involved in can then be predicted with tools such as mgrast camera and imgm metatranscriptomics studies have been performed to study the gene expression of microbial communities through methods such as the pyrosequencing of extracted rna structure based studies have also identified noncoding rnas ncrnas such as ribozymes from microbiota metaproteomics is an approach that studies the proteins expressed by microbiota giving insight into its functional potential the human microbiome project launched in 2008 was a united states national institutes of health initiative to identify and characterize microorganisms found in both healthy and diseased humans'</li><li>'by crosslinking the cytoskeleton protein actin burkholderia pseudomallei and edwardsiella tarda are two other organisms which possess a t6ss that appears dedicated for eukaryotic targeting the t6ss of plant pathogen xanthomonas citri protects it from predatory amoeba dictyostelium discoideum a wide range of gramnegative bacteria have been shown to have antibacterial t6sss including opportunistic pathogens such as pseudomonas aeruginosa obligate commensal species that inhabit the human gut bacteroides spp and plantassociated bacteria such as agrobacterium tumefaciens these systems exert antibacterial activity via the function of their secreted substrates all characterized bacterialtargeting t6ss proteins act as toxins either by killing or preventing the growth of target cells the mechanisms of toxicity toward target cells exhibited by t6ss substrates are diverse but typically involve targeting of highly conserved bacterial structures including degradation of the cell wall through amidase or glycohydrolase activity disruption of cell membranes through lipase activity or pore formation cleavage of dna and degradation of the essential metabolite nad t6sspositive bacterial species prevent t6ssmediated intoxication towards self and kin cells by producing immunity proteins specific to each secreted toxin the immunity proteins function by binding to the toxin proteins often at their active site thereby blocking their activity some research has gone into regulation of t6ss by two component systems in p aeruginosa it has been observed
that the gacsrsm twocomponent system is involved in type vi secretion system regulation this system regulates the expression of rsm small regulatory rna molecules and has also been implicated in biofilm formation upon the gacsrsm pathway stimulation an increase in rsm molecules leads to inhibition of mrnabinding protein rsma rsma is a translational inhibitor that binds to sequences near the ribosomebinding site for t6ss gene expression this level of regulation has also been observed in p fluorescens and p syringae there are various examples in which quorum sensing regulates t6ss in vibrio cholerae t6ss studies it has been observed that serotype o37 has high vas gene expression serotypes o139 and o1 on the other hand exhibit the opposite with markedly low vas gene expression it has been suggested that the differences in expression are attributable to differences in'</li></ul> |
| 8 | <ul><li>'in radio communication and avionics a conformal antenna or conformal array is a flat array antenna which is designed to conform or follow some prescribed shape for example a flat curving antenna which is mounted on or embedded in a curved surface it consists of multiple individual antennas mounted on or in the curved surface which work together as a single antenna to transmit or receive radio waves conformal antennas were developed in the 1980s as avionics antennas integrated into the curving skin of military aircraft to reduce aerodynamic drag replacing conventional antenna designs which project from the aircraft surface military aircraft and missiles are the largest application of conformal antennas but they are also used in some civilian aircraft military ships and land vehicles as the cost of the required processing technology comes down they are being considered for use in civilian applications such as train antennas car radio antennas and cellular base station antennas to save space and also to make the antenna less visually intrusive by integrating it into existing objects conformal antennas are a form of phased array antenna they are composed of an array of many identical small flat antenna elements such as dipole horn or patch antennas covering the surface at each antenna the current from the transmitter passes through a phase shifter device which are all controlled by a microprocessor computer by controlling the phase of the feed current the nondirectional radio waves emitted by the individual antennas can be made to combine in front of the antenna by the process of interference forming a strong beam or beams of radio waves pointed in any desired direction in a receiving antenna the weak individual radio signals received by each antenna element are combined in the correct phase to enhance signals coming from a particular direction so the antenna can be made sensitive to the signal from a particular station and reject interfering signals from other directions in a conventional phased array the individual antenna elements are mounted on a flat surface in a conformal antenna they are mounted on a curved surface and the phase shifters also compensate for the different phase shifts caused by the varying path lengths of the radio waves due to the location of the individual antennas on the curved surface because the individual antenna elements must be small conformal arrays are typically limited to high frequencies in the uhf or microwave range where the wavelength of the waves is small enough that small antennas can be used'</li><li>'autopilot are tightly controlled and extensive test procedures are put in place some autopilots also use design diversity in this safety feature critical software processes will not only run on separate computers and possibly even using different architectures but each computer will run software created by different engineering teams often being programmed in different programming languages it is generally considered unlikely that different engineering teams will make the same mistakes as the software becomes more expensive and complex design diversity is becoming less common because fewer engineering companies can afford it the flight control computers on the space shuttle used this design there were five computers four of which redundantly ran identical software and a fifth backup running software that was developed independently the software on the fifth system provided only the basic functions needed to fly the shuttle further reducing 
any possible commonality with the software running on the four primary systems a stability augmentation system sas is another type of automatic flight control system however instead of maintaining the aircraft required altitude or flight path the sas will move the aircraft control surfaces to damp unacceptable motions sas automatically stabilizes the aircraft in one or more axes the most common type of sas is the yaw damper which is used to reduce the dutch roll tendency of sweptwing aircraft some yaw dampers are part of the autopilot system while others are standalone systems yaw dampers use a sensor to detect how fast the aircraft is rotating either a gyroscope or a pair of accelerometers a computeramplifier and an actuator the sensor detects when the aircraft begins the yawing part of dutch roll a computer processes the signal from the sensor to determine the rudder deflection required to damp the motion the computer tells the actuator to move the rudder in the opposite direction to the motion since the rudder has to oppose the motion to reduce it the dutch roll is damped and the aircraft becomes stable about the yaw axis because dutch roll is an instability that is inherent in all sweptwing aircraft most sweptwing aircraft need some sort of yaw damper there are two types of yaw damper the series yaw damper and the parallel yaw damper the actuator of a parallel yaw damper will move the rudder independently of the pilots rudder pedals while the actuator of a series yaw damper is clutched to the rudder control quadrant and will result in pedal movement when the rudder moves some aircraft have stability augmentation systems that will stabilize the aircraft in more than a single axis the boeing b52 for example requires both pitch and yaw sas in order to provide a stable bombing'</li><li>'airground radiotelephone service is a system which allows voice calls and other communication services to be made from an aircraft to either a satellite or land based network the service operates via a transceiver mounted in the aircraft on designated frequencies in the us these frequencies have been allocated by the federal communications commission the system is used in both commercial and general aviation services licensees may offer a wide range of telecommunications services to passengers and others on aircraft a us airground radiotelephone transmits a radio signal in the 849 to 851 megahertz range this signal is sent to either a receiving ground station or a communications satellite depending on the design of the particular system commercial aviation airground radiotelephone service licensees operate in the 800 mhz band and can provide communication services to all aviation markets including commercial governmental and private aircraft if it is a call from a commercial airline passenger radiotelephone the call is then forwarded to a verification center to process credit card or calling card information the verification center will then route the call to the public switched telephone network which completes the call for the return signal ground stations and satellites use a radio signal in the 894 to 896 megahertz range two separate frequency bands have been allocated by the fcc for airground telephone service one at 454459 mhz was originally reserved for general aviation use nonairliners and the 800 mhz range primarily used for airliner telephone service which has shown limited acceptance by passengers att corporation abandoned its 800 mhz airground offering in 2005 and verizon airfone formerly gte
airfone is scheduled for decommissioning in late 2008 although the fcc has reauctioned verizons spectrum see below skytel now defunct which had the third nationwide 800 mhz license elected not to build it but continued to operate in the 450 mhz agras system its agras license and operating network was sold to bell industries in april 2007 the 450 mhz general aviation network is administered by midamerica computer corporation in blair nebraska which has called the service agras and requires the use of instruments manufactured by terra and chelton aviationwulfsberg electronics and marketed as the flitephone vi series general aviation airground radiotelephone service licensees operate in the 450 mhz band and can provide a variety of telecommunications services to private aircraft such as small single engine planes and corporate jets in the 800 mhz band the fcc defined 10 blocks of paired uplinkdownlink narrowband ranges 6 khz and six control ranges 32 khz six carriers were licensed to offer inflight telephony each being granted nonex'</li></ul> |
| 25 | <ul><li>'given a finite number of vectors $x_1,x_2,\dots,x_n$ in a real vector space a conical combination conical sum or weighted sum of these vectors is a vector of the form $\alpha_1x_1+\alpha_2x_2+\cdots+\alpha_nx_n$ where $\alpha_i$ are nonnegative real numbers the name derives from the fact that the set of all conical sums of vectors defines a cone possibly in a lowerdimensional subspace the set of all conical combinations for a given set s is called the conical hull of s and denoted cone(s) or coni(s) that is $\operatorname{coni}(s)=\left\{\sum_{i=1}^{k}\alpha_ix_i:x_i\in s,\ \alpha_i\in\mathbb{R}_{\geq 0},\ k\in\mathbb{N}\right\}$ by taking $k=0$ it follows that the zero vector origin belongs to all conical hulls since the summation becomes an empty sum the conical hull of a set s is a convex set in fact it is the intersection of all convex cones containing s plus the origin if s is a compact set in particular when it is a finite nonempty set of points then the condition plus the origin is unnecessary if we discard the origin we can divide all coefficients by their sum to see that a conical combination is a convex combination scaled by a positive factor therefore conical combinations and conical hulls are in fact convex conical combinations and convex conical hulls respectively moreover the above remark about dividing the coefficients while discarding the origin implies that the conical combinations and hulls may be considered as convex combinations and convex hulls in the projective space while the convex hull of a compact set is also a compact set this is not so for the conical hull first of all the latter one is unbounded moreover it is not even necessarily a closed set a counterexample is a sphere passing through the origin with the conical hull being an open halfspace plus the origin however if s is a nonempty convex compact set which does not contain the origin then the convex conical hull of s is a closed set affine combination convex combination linear combination'</li><li>'$\left(\sum\delta f\right)(a,nh)=f(a+nh)-f(a)$ fundamental theorem of calculus ii $\delta\left(\sum g\right)=g$ the definitions are applied to graphs as follows if a function a $0$-cochain $f$ is defined at the nodes of a graph $a,b,c,\ldots$ then its exterior derivative or the differential is the difference ie the following function defined on the edges of the graph a $1$-cochain $(df)\big([a,b]\big)=f(b)-f(a)$ if $g$ is a $1$-cochain then its integral over a sequence of edges $\sigma$ of the graph is the sum of its values over all edges of $\sigma$ path integral $\int_\sigma g=\sum_\sigma g\big([a,b]\big)$ these are the properties constant rule if $c$ is a constant then $dc=0$ linearity if $a$ and $b$ are constants $d(af+bg)=a\,df+b\,dg$ and $\int_\sigma(af+bg)=a\int_\sigma f+b\int_\sigma g$ product rule $d(fg)=f\,dg+g\,df+df\,dg$ fundamental theorem of calculus i if a $1$-chain $\sigma$ consists of the edges $(a_0,a_1),(a_1,a_2),\ldots,(a_{n-1},a_n)$ then for any $0$-cochain $f$ $\int_\sigma df=f(a_n)-f(a_0)$ fundamental theorem
of calculus ii if the graph is a tree $g$ is a $1$-cochain and a function $0$-cochain is defined on the nodes of the graph by $f(x)=\int_\sigma g$ where a $1$-chain $\sigma$ consists of $(a_0,a_1),(a_1,a_2),\ldots,(a_{n-1},x)$ for some fixed $a_0$ then $df=g$ see references a simplicial complex $s$ is a set of simplices that satisfies the following conditions 1 every face of'</li><li>'##2 xn of n real variables can be considered as a function on rn that is with rn as its domain the use of the real nspace instead of several variables considered separately can simplify notation and suggest reasonable definitions consider for n 2 a function composition of the following form where functions g1 and g2 are continuous if for all x1 ∈ r the function f x1 · is continuous by x2 and for all x2 ∈ r the function f · x2 is continuous by x1 then f is not necessarily continuous continuity is a stronger condition the continuity of f in the natural r2 topology discussed below also called multivariable continuity which is sufficient for continuity of the composition f the coordinate space rn forms an ndimensional vector space over the field of real numbers with the addition of the structure of linearity and is often still denoted rn the operations on rn as a vector space are typically defined by the zero vector is given by and the additive inverse of the vector x is given by this structure is important because any ndimensional real vector space is isomorphic to the vector space rn in standard matrix notation each element of rn is typically written as a column vector and sometimes as a row vector the coordinate space rn may then be interpreted as the space of all n × 1 column vectors or all 1 × n row vectors with the ordinary matrix operations of addition and scalar multiplication linear transformations from rn to rm may then be written as m × n matrices which act on the elements of rn via left multiplication when the elements of rn are column vectors and on elements of rm via right multiplication when they are row vectors the formula for left multiplication a special case of matrix multiplication is any linear transformation is a continuous function see below also a matrix defines an open map from rn to rm if and only if the rank of the matrix equals m the coordinate space rn comes with a standard basis to see that this is a basis note that an arbitrary vector in rn can be written uniquely in the form the fact that real numbers unlike many other fields constitute an ordered field yields an orientation structure on rn any fullrank linear map of rn to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix if one permutes coordinates or in other words elements of the basis the resulting orientation will depend on the parity of the permutation diffeomorphisms of rn or domains in it by their virtue to avoid zero jacobian are also classified to orientationpreserving and orientationreversing it has important consequences for the theory of differential forms whose applications include electrodynamics'</li></ul> |
| 34 | <ul><li>'tethered to state and corporatesponsored science and social studies standards or fails to articulate the political necessity for widespread understanding of the unsustainable nature of modern lifestyles however ecopedagogy has tried to utilize the ongoing united nations decade of education for sustainable development 2005 – 2015 to make strategic interventions on behalf of the oppressed using it as an opportunity to unpack and clarify the concept of sustainable development ecopedagogy scholar richard kahn describes the three main goals of the ecopedagogy movement to be creating opportunities for the proliferation of ecoliteracy programs both within schools and society bridging the gap of praxis between scholars and the public especially activists on ecopedagogical interests instigating dialogue and selfreflective solidarity across the many groups among educational left particularly in light of the existing planetary crisis angela antunes and moacir gadotti 2005 write ecopedagogy is not just another pedagogy among many other pedagogies it not only has meaning as an alternative project concerned with nature preservation natural ecology and the impact made by human societies on the natural environment social ecology but also as a new model for sustainable civilization from the ecological point of view integral ecology which implies making changes on economic social and cultural structures according to social movement theorists ron ayerman and andrew jamison there are three broad dimensions of environmentally related movements cosmological technological and organizational in ecopedagogy these dimensions are outlined by richard kahn 2010 as the following the cosmological dimension focuses on how ecoliteracy ie understanding the natural systems that sustain life can transform people ’ s worldviews for example assumptions about society ’ s having the right to exploit nature can be transformed into understanding of the need for ecological balance to support society in the long term the success of such ‘ cosmological ’ thinking transformations can be assessed by the degree to which such paradigm shifts are adopted by the public the technological dimension is twofold critiquing the set of polluting technologies that have contributed to traditional development as well as some which are used or misused under the pretext of sustainable development and promoting clean technologies that do not interfere with ecological and social balance the organizational dimension emphasizes that knowledge should be of and for the people thus academics should be in dialogue with public discourse and social movements ecopedagogy is not the collection of theories or practices developed by any particular set of individuals rather akin to the world social forum and other related forms of contemporary popular education strategies it is a worldwide association of critical educators theorists nongovernmental and governmental'</li><li>'marshall college dr moog has used pogil materials in his teaching since 1994 and is a coauthor of pogil materials for both general and physical chemistry'</li><li>'##mans book is informed by an advanced theoretical knowledge of scholarly research documents and their composition for example chapter 6 is about recognizing the many voices in a text the practical advice given is based on textual theory mikhail bakhtin and julia kristeva chapter 8 is titled evaluating the book as a whole the book review and the first heading is books as tools basically critical reading is related to
epistemological issues hermeneutics eg the version developed by hansgeorg gadamer has demonstrated that the way we read and interpret texts is dependent on our preunderstanding and prejudices human knowledge is always an interpretative clarification of the world not a pure interestfree theory hermeneutics may thus be understood as a theory about critical reading this field was until recently associated with the humanities not with science this situation changed when thomas samuel kuhn published his book 1962 the structure of scientific revolutions which can be seen as an hermeneutic interpretation of the sciences because it conceives the scientists as governed by assumptions which are historically embedded and linguistically mediated activities organized around paradigms that direct the conceptualization and investigation of their studies scientific revolutions imply that one paradigm replaces another and introduces a new set of theories approaches and definitions according to mallery hurwitz duffy 1992 the notion of a paradigmcentered scientific community is analogous to gadamers notion of a linguistically encoded social tradition in this way hermeneutics challenge the positivist view that science can cumulate objective facts observations are always made on the background of theoretical assumptions they are theory dependent by conclusion critical reading is not just something that any scholar is able to do the way we read is partly determined by the intellectual traditions which have formed our beliefs and thinking generally we read papers within our own culture or tradition less critically compared to our reading of papers from other traditions or paradigms the psychologist cyril burt is known for his studies on the effect of heredity on intelligence shortly after he died his studies of inheritance and intelligence came into disrepute after evidence emerged indicating he had falsified research data a 1994 paper by william h tucker is illuminative on both how critical reading was performed in the discovery of the falsified data as well as in many famous psychologists noncritical reading of burts papers tucker shows that the recognized experts within the field of intelligence research blindly accepted cyril burts research even though it was without scientific value and probably directly faked they wanted to believe that iq is hereditary and considered uncritically empirical claims supporting this view this paper thus demonstrates how critical reading and the opposite'</li></ul> |
| 23 | <ul><li>'in biochemistry immunostaining is any use of an antibodybased method to detect a specific protein in a sample the term immunostaining was originally used to refer to the immunohistochemical staining of tissue sections as first described by albert coons in 1941 however immunostaining now encompasses a broad range of techniques used in histology cell biology and molecular biology that use antibodybased staining methods immunohistochemistry or ihc staining of tissue sections or immunocytochemistry which is the staining of cells is perhaps the most commonly applied immunostaining technique while the first cases of ihc staining used fluorescent dyes see immunofluorescence other nonfluorescent methods using enzymes such as peroxidase see immunoperoxidase staining and alkaline phosphatase are now used these enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy alternatively radioactive elements can be used as labels and the immunoreaction can be visualized by autoradiography tissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture inappropriate or prolonged fixation may significantly diminish the antibody binding capability many antigens can be successfully demonstrated in formalinfixed paraffinembedded tissue sections however some antigens will not survive even moderate amounts of aldehyde fixation under these conditions tissues should be rapidly fresh frozen in liquid nitrogen and cut with a cryostat the disadvantages of frozen sections include poor morphology poor resolution at higher magnifications difficulty in cutting over paraffin sections and the need for frozen storage alternatively vibratome sections do not require the tissue to be processed through organic solvents or high heat which can destroy the antigenicity or disrupted by freeze thawing the disadvantage of vibratome sections is that the sectioning process is slow and difficult with soft and poorly fixed tissues and that chatter marks or vibratome lines are often apparent in the sections the detection of many antigens can be dramatically improved by antigen retrieval methods that act by breaking some of the protein crosslinks formed by fixation to uncover hidden antigenic sites this can be accomplished by heating for varying lengths of times heat induced epitope retrieval or hier or using enzyme digestion proteolytic induced epitope retrieval or pier one of the main difficulties with ihc staining is overcoming specific or nonspecific background optimisation of fixation methods and times pre'</li><li>'the strategic advisory group of experts sage is the principal advisory group to world health organization who for vaccines and immunization established in 1999 through the merging of two previous committees notably the scientific advisory group of experts which served the program for vaccine development and the global advisory group which served the epi program by directorgeneral of the who gro harlem brundtland it is charged with advising who on overall global policies and strategies ranging from vaccines and biotechnology research and development to delivery of immunization and its linkages with other health interventions sage is concerned not just with childhood vaccines and immunization but all vaccinepreventable diseases sage provides global recommendations on immunization policy and such recommendations will be further translated by advisory committee at the country level the sage has 15 members who are
recruited and selected as acknowledged experts from around the world in the fields of epidemiology public health vaccinology paediatrics internal medicine infectious diseases immunology drug regulation programme management immunization delivery healthcare administration health economics and vaccine safety members are appointed by directorgeneral of the who to serve an initial term of 3 years and can only be renewed once sage meets at least twice annually in april and november with working groups established for detailed review of specific topics prior to discussion by the full group priorities of work and meeting agendas are developed by the group in consultation with whounicef the secretariat of the gavi alliance and who regional offices participate as observers in sage meetings and deliberations who also invites other observers to sage meetings including representatives from who regional technical advisory groups nongovernmental organizations international professional organizations technical agencies donor organizations and associations of manufacturers of vaccines and immunization technologies additional experts may be invited as appropriate to further contribute to specific agenda items as of december 2022 working groups were established for the following vaccines covid19 dengue ebola hpv meningococcal vaccines and vaccination pneumococcal vaccines polio vaccine programme advisory group pag for the malaria vaccine implementation programme smallpox and monkeypox vaccines national immunization technical advisory group countrylevel advisory committee'</li><li>'rates or body cells that are dying which subsequently cause physiological problems are generally not specifically targeted by the immune system since tumor cells are the patients own cells tumor cells however are highly abnormal and many display unusual antigens some such tumor antigens are inappropriate for the cell type or its environment monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells but are debilitating to ones health immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology which provided the first reliable source of monoclonal antibodies these advances allowed for the specific targeting of tumors both in vitro and in vivo initial research on malignant neoplasms found mab therapy of limited and generally shortlived success with blood malignancies treatment also had to be tailored to each individual patient which was impracticable in routine clinical settings four major antibody types that have been developed are murine chimeric humanised and human antibodies of each type are distinguished by suffixes on their name initial therapeutic antibodies were murine analogues suffix omab these antibodies have a short halflife in vivo due to immune complex formation limited penetration into tumour sites and inadequately recruit host effector functions chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications understanding of proteomics has proven essential in identifying novel tumour targets initially murine antibodies were obtained by hybridoma technology for which jerne kohler and milstein received a nobel prize however the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies except in some specific circumstances major problems associated with murine antibodies included reduced stimulation of cytotoxicity and
the formation of complexes after repeated administration which resulted in mild allergic reactions and sometimes anaphylactic shock hybridoma technology has been replaced by recombinant dna technology transgenic mice and phage display to reduce murine antibody immunogenicity attacks by the immune system against the antibody murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency this was initially achieved by the production of chimeric suffix ximab and humanized antibodies suffix zumab chimeric antibodies are composed of murine variable regions fused onto human constant regions taking human gene sequences from the kappa light chain and the igg1 heavy chain results in antibodies that are approximately 65 human this reduces immunogenicity and thus increases serum halflifehumanised antibodies are produced by grafting murine hypervariable regions on amino acid domains'</li></ul> |
| 12 | <ul><li>'of integers rational numbers algebraic numbers real numbers or complex numbers $s_0, s_1, s_2, s_3, \ldots$ written as $(s_n)_{n=0}^{\infty}$ as a shorthand satisfying a formula of the form $s_n = c_1 s_{n-1} + c_2 s_{n-2} + \cdots + c_d s_{n-d}$ for all $n \geq d$ where the $c_i$ are constants this equation is called a linear recurrence with constant coefficients of order d the order of the constantrecursive sequence is the smallest $d \geq 1$ such that the sequence satisfies a formula of the above form or $d = 0$ for the everywherezero sequence the d coefficients $c_1, c_2, \dots, c_d$ must be coefficients ranging over the same domain as the sequence integers rational numbers algebraic numbers real numbers or complex numbers for example for a rational constantrecursive sequence $s_i$ and $c_i$ must be rational numbers the definition above allows eventuallyperiodic sequences such as $1, 0, 0, 0, \ldots$ and $0, 1, 0, 0, \ldots$ some authors require that $c_d \neq 0$ which excludes such sequences the sequence 0 1 1 2 3 5 8 13 of fibonacci numbers is constantrecursive of order 2 because it satisfies the recurrence $F_n = F_{n-1} + F_{n-2}$ with $F_0 = 0$ and $F_1 = 1$ for example $F_2 = F_1 + F_0 = 1 + 0 = 1$ and $F_6 = F_5 + F_4 = 5 + 3 = 8$ the sequence 2 1 3 4 7 11 of lucas numbers satisfies the same recurrence as the fibonacci sequence but with initial conditions $L_0 = 2$ and $L_1 = 1$ more generally every lucas sequence is constantrecursive of order 2 for any $a$ and any $r \neq 0$ the arithmetic progression $a, a+r, a+2r, \ldots$ is constantrecursive of order 2 because it satisfies $s_n = 2s_{n-1} - s_{n-2}$ generalizing this see polynomial sequences below for any $a \neq 0$'</li><li>'$\ldots\widehat{Q}_S h \geq \varepsilon/2$ where $R$ and $S$ are iid samples of size $m$ drawn according to the distribution $P$ one can view $R$ as the original randomly drawn sample of length $m$ while $S$ may be thought as the testing sample which is used to estimate $Q_P h$ permutation since $R$ and $S$ are picked identically and independently so swapping elements between them will not change the probability distribution on $R$ and $S$ so we will try to bound the probability of $\lvert\widehat{Q}_R h - \widehat{Q}_S h\rvert \geq \varepsilon/2$ for some $h \in H$ by considering the effect of a specific collection of permutations of the joint sample $x = RS$ specifically we consider permutations $\sigma(x)$ which swap $x_i$ and $x_{m+i}$ in some subset of $\{1, 2, \ldots, m\}$ the symbol $RS$ means the concatenation of $R$ and $S$ reduction to a finite class we can now restrict the function class $H$ to a fixed joint sample and hence if $H$ has finite vc dimension it reduces the problem to one involving a finite function class we present the technical details of the proof lemma let $V = \{x \in X^m : \lvert Q_P h - \widehat{Q}_x h\rvert \geq \varepsilon \text{ for some } h \in H\}$ and $R = \{(r, s) \in X^m \times X^m : \lvert\widehat{Q}_r h - \widehat{Q}_s h\rvert \geq \varepsilon/2 \text{ for some } h \in H\}$ then for $m \geq \frac{2}{\varepsilon^2}$ we have $P^m(V) \leq 2P^{2m}(R)$ proof by the triangle inequality if $\lvert Q_P h - \widehat{Q}_r h\rvert \geq \varepsilon$ and $\lvert Q_P h - \widehat{Q}_s h\rvert \leq \varepsilon/2$ then $\lvert\widehat{Q}_r h - \widehat{Q}_s h\rvert \geq$'</li><li>'x nonempty subsets or counting equivalence relations on n with exactly x classes indeed for any surjective function f n → x the relation of having the same image under f is such an equivalence relation and it does not change when a permutation of x is subsequently applied conversely one can turn such an equivalence relation into a surjective function by assigning the elements of x in some manner to the x equivalence classes the number of such partitions or equivalence relations is by definition the stirling number of the second kind S(n, x) also written $\textstyle\left\{{n \atop x}\right\}$ its value can be described using a recursion relation or using generating functions but unlike binomial coefficients there is no closed formula for these numbers that does not involve a summation surjective functions from n to x for each surjective function f n → x its orbit under permutations of x has $x!$ elements since composition on the left with two distinct permutations of x never gives the same function on n the permutations must differ at some element of x which can always be written as f(i) for some i ∈ n and the compositions will then differ at i it follows that the number for this case is $x!$ times the number for the previous case that is $\textstyle x!\left\{{n \atop x}\right\}$ example if $x = \{a, b\}$ and $n = \{1, 2, 3\}$ then $\left\vert\{aab, aba, abb, baa, bab, bba\}\right\vert = 2!\left\{{3 \atop 2}\right\} = 2 \times 3 = 6$ functions from n to x up to a permutation of x this case is like the corresponding one for surjective functions but some elements of x might not correspond to any equivalence class at all since one considers functions up to a permutation of x it does not matter which elements are concerned just how many as a consequence one is counting equivalence relations on n with at most x classes and the result is obtained from the mentioned case by summation over values up to x giving $\textstyle\sum_{k=0}^{x}\left\{{n \atop k}\right\}$ in case x ≥ n the size of x poses no restriction at all and one is counting all equivalence relations on a set of n elements equivalently all partitions of such a set therefore $\textstyle\sum_{k=0}^{n}\left\{{n \atop k}\right\}$ gives an expression for the bell number $B_n$ surjective functions from n to x'</li></ul> |
| 31 | <ul><li>'are real but the future is not until einsteins reinterpretation of the physical concepts associated with time and space in 1907 time was considered to be the same everywhere in the universe with all observers measuring the same time interval for any event nonrelativistic classical mechanics is based on this newtonian idea of time einstein in his special theory of relativity postulated the constancy and finiteness of the speed of light for all observers he showed that this postulate together with a reasonable definition for what it means for two events to be simultaneous requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer the theory of special relativity finds a convenient formulation in minkowski spacetime a mathematical structure that combines three dimensions of space with a single dimension of time in this formalism distances in space can be measured by how long light takes to travel that distance eg a lightyear is a measure of distance and a meter is now defined in terms of how far light travels in a certain amount of time two events in minkowski spacetime are separated by an invariant interval which can be either spacelike lightlike or timelike events that have a timelike separation cannot be simultaneous in any frame of reference there must be a temporal component and possibly a spatial one to their separation events that have a spacelike separation will be simultaneous in some frame of reference and there is no frame of reference in which they do not have a spatial separation different observers may calculate different distances and different time intervals between two events but the invariant interval between the events is independent of the observer and his or her velocity unlike space where an object can travel in the opposite directions and in 3 dimensions time appears to have only one dimension and only one direction – the past lies behind fixed and immutable while the future lies ahead and is not necessarily fixed yet most laws of physics allow any process to proceed both forward and in reverse there are only a few physical phenomena that violate the reversibility of time this time directionality is known as the arrow of time acknowledged examples of the arrow of time are radiative arrow of time manifested in waves eg light and sound travelling only expanding rather than focusing in time see light cone entropic arrow of time according to the second law of thermodynamics an isolated system evolves toward a larger disorder rather than orders spontaneously quantum arrow time which is related to irreversibility of measurement in quantum mechanics according to the copenhagen interpretation of quantum mechanics weak arrow of time preference for a certain time direction of weak force in'</li><li>'presented is as easy to understand as possible although illuminating a branch of mathematics is the purpose of textbooks rather than the mathematical theory they might be written to cover a theory can be either descriptive as in science or prescriptive normative as in philosophy the latter are those whose subject matter consists not of empirical data but rather of ideas at least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation a field of study is sometimes named a theory because its basis is some initial set of assumptions describing the fields approach to the 
subject these assumptions are the elementary theorems of the particular theory and can be thought of as the axioms of that field some commonly known examples include set theory and number theory however literary theory critical theory and music theory are also of the same form one form of philosophical theory is a metatheory or metatheory a metatheory is a theory whose subject matter is some other theory or set of theories in other words it is a theory about theories statements made in the metatheory about the theory are called metatheorems a political theory is an ethical theory about the law and government often the term political theory refers to a general view or specific ethic political belief or attitude thought about politics in social science jurisprudence is the philosophical theory of law contemporary philosophy of law addresses problems internal to law and legal systems and problems of law as a particular social institution most of the following are scientific theories some are not but rather encompass a body of knowledge or art such as music theory and visual arts theories anthropology carneiros circumscription theory astronomy alpher – bethe – gamow theory — b2fh theory — copernican theory — newtons theory of gravitation — hubbles law — keplers laws of planetary motion ptolemaic theory biology cell theory — chemiosmotic theory — evolution — germ theory — symbiogenesis chemistry molecular theory — kinetic theory of gases — molecular orbital theory — valence bond theory — transition state theory — rrkm theory — chemical graph theory — flory – huggins solution theory — marcus theory — lewis theory successor to brønsted – lowry acid – base theory — hsab theory — debye – huckel theory — thermodynamic theory of polymer elasticity — reptation theory — polymer field theory — møller – plesset perturbation theory — density functional theory — frontier molecular orbital theory — polyhedral skeletal electron pair theory — baeyer strain theory — quantum theory of'</li><li>'largely agreed with parmenidess reasoning on nothing aristotle differs with parmenidess conception of nothing and says although these opinions seem to follow logically in a dialectical discussion yet to believe them seems next door to madness when one considers the factsin modern times albert einsteins concept of spacetime has led many scientists including einstein himself to adopt a position remarkably similar to parmenides on the death of his friend michele besso einstein consoled his widow with the words now he has departed from this strange world a little ahead of me that signifies nothing for those of us that believe in physics the distinction between past present and future is only a stubbornly persistent illusion leucippus leucippus early 5th century bc one of the atomists along with other philosophers of his time made attempts to reconcile this monism with the everyday observation of motion and change he accepted the monist position that there could be no motion without a void the void is the opposite of being it is notbeing on the other hand there exists something known as an absolute plenum a space filled with matter and there can be no motion in a plenum because it is completely full but there is not just one monolithic plenum for existence consists of a multiplicity of plenums these are the invisibly small atoms of greek atomist theory later expanded by democritus c 460 – 370 bc which allows the void to exist between them in this scenario macroscopic objects can comeintobeing move through space and pass into 
notbeing by means of the coming together and moving apart of their constituent atoms the void must exist to allow this to happen or else the frozen world of parmenides must be accepted bertrand russell points out that this does not exactly defeat the argument of parmenides but rather ignores it by taking the rather modern scientific position of starting with the observed data motion etc and constructing a theory based on the data as opposed to parmenides attempts to work from pure logic russell also observes that both sides were mistaken in believing that there can be no motion in a plenum but arguably motion cannot start in a plenum cyril bailey notes that leucippus is the first to say that a thing the void might be real without being a body and points out the irony that this comes from a materialistic atomist leucippus is therefore the first to say that nothing has a reality attached to it aristotle newton descartes aristotle 384 – 322 bc provided the classic escape from the logical problem posed by parmenides by distinguishing things that'</li></ul> |
| 38 | <ul><li>'in sociolinguistics prestige is the level of regard normally accorded a specific language or dialect within a speech community relative to other languages or dialects prestige varieties are language or dialect families which are generally considered by a society to be the most correct or otherwise superior in many cases they are the standard form of the language though there are exceptions particularly in situations of covert prestige where a nonstandard dialect is highly valued in addition to dialects and languages prestige is also applied to smaller linguistic features such as the pronunciation or usage of words or grammatical constructs which may not be distinctive enough to constitute a separate dialect the concept of prestige provides one explanation for the phenomenon of variation in form among speakers of a language or languagesthe presence of prestige dialects is a result of the relationship between the prestige of a group of people and the language that they use generally the language or variety that is regarded as more prestigious in that community is the one used by the more prestigious group the level of prestige a group has can also influence whether the language that they speak is considered its own language or a dialect implying that it does not have enough prestige to be considered its own language social class has a correlation with the language that is considered more prestigious and studies in different communities have shown that sometimes members of a lower social class attempt to emulate the language of individuals in higher social classes to avoid how their distinct language would otherwise construct their identity the relationship between language and identity construction as a result of prestige influences the language used by different individuals depending on which groups they do belong or want to belong sociolinguistic prestige is especially visible in situations where two or more distinct languages are used and in diverse socially stratified urban areas in which there are likely to be speakers of different languages andor dialects interacting often the result of language contact depends on the power relationship between the languages of the groups that are in contact the prevailing view among contemporary linguists is that regardless of perceptions that a dialect or language is better or worse than its counterparts when dialects and languages are assessed on purely linguistic grounds all languages — and all dialects — have equal meritadditionally which varieties registers or features will be considered more prestigious depends on audience and context there are thus the concepts of overt and covert prestige overt prestige is related to standard and formal language features and expresses power and status covert prestige is related more to vernacular and often patois and expresses solidarity community and group identity more than authority prestige varieties are those that are regarded mostly highly within a society as such the standard language the form promoted by authorities — usually governmental or from those in power — and considered'</li><li>'english elements engaged in the codeswitching process are mostly of one or two words in length and are usually content words that can fit into the surrounding cantonese phrase fairly easily like nouns verbs adjectives and occasionally adverbs examples include [UNK] canteen 食 [UNK] heoi3 ken6tin1 sik6 faan6 go to the canteen for lunch [UNK] [UNK] [UNK] press [UNK] hou2 do1 je5 pet1 si4 nei5 a lot of 
things press you 我 [UNK] sure ngo5 m4 su1aa4 im not sure [UNK] 我 check 一 check [UNK] bong1 ngo5 cek1 jat1 cek1 aa1 help me searchcheck for itmeanwhile structure words like determiners conjunctions and auxiliary verbs almost never appear alone in the predominantly cantonese discourse which explains the ungrammaticality of two [UNK] does not make sense but literally means two parts english lexical items on the other hand are frequently assimilated into cantonese grammar for instance [UNK] part loeng5 paat1 two parts part would lose its plural morpheme s as do its counterpart in cantonese equip [UNK] ji6 kwip1 zo2 equipped equip is followed by a cantonese perfective aspect marker a more evident case of the syntactic assimilation would be where a negation marker is inserted into an english compound adjective or verb to form yes – no questions in cantonese [UNK] [UNK] [UNK] [UNK] 愛 [UNK] ? keoi5 ho2 m4 ho2 oi3 aa3 is shehe lovely is pure cantonese while a sentence like [UNK] cu [UNK] cute [UNK] ? keoi5 kiu1 m4 cute aa3 is heshe cute is a typical example of the assimilationfor english elements consisting of two words or more they generally retain english grammar internally without disrupting the surrounding cantonese grammar for example [UNK] [UNK] [UNK] [UNK] parttime job [UNK] m5 sai2 zoi3 wan2 paat1 taam1 zop1 laa3 you dont need to look for a parttime job againexamples are taken from the same source the first major framework dichotomises motivations of codeswitching in hong kong into expedient mixing and orientational mixing for expedient mixing the speaker would turn to english eg form if the correspondent low cantonese expression is not available and the existing high cantonese expression eg [UNK] [UNK] biu2 gaak3 sounds too formal in the case of orientational mixing despite the presence of both high and low expression eg for barbecue there exists both [UNK] [UNK] siu1'</li><li>'the participants with less dominant participants generally being more attentive to more dominant participants ’ words an opposition between urban and suburban linguistic variables is common to all metropolitan regions of the united states although the particular variables distinguishing urban and suburban styles may differ from place to place the trend is for urban styles to lead in the use of nonstandard forms and negative concord in penny eckerts study of belten high in the detroit suburbs she noted a stylistic difference between two groups that she identified schooloriented jocks and urbanoriented schoolalienated burnouts the variables she analyzed were the usage of negative concord and the mid and low vowels involved in the northern cities shift which consists of the following changes æ ea a æ ə a ʌ ə ay oy and ɛ ʌ y here is equivalent to the ipa symbol j all of these changes are urbanled as is the use of negative concord the older mostly stabilized changes æ ea a æ and ə a were used the most by women while the newer changes ʌ ə ay oy and ɛ ʌ were used the most by burnouts eckert theorizes that by using an urban variant such as foyt they were not associating themselves with urban youth rather they were trying to index traits that were associated with urban youth such as tough and streetsmart this theory is further supported by evidence from a subgroup within the burnout girls which eckert refers to as ‘ burnedout ’ burnout girls she characterizes this group as being even more antiestablishment than the ‘ regular ’ burnout girls this subgroup led overall in the use of negative concord as well as in femaleled 
changes this is unusual because negative concord is generally used the most by males ‘ burnedout ’ burnout girls were not indexing masculinity — this is shown by their use of femaleled variants and the fact that they were found to express femininity in nonlinguistic ways this shows that linguistic variables may have different meanings in the context of different styles there is some debate about what makes a style gay in stereotypically flamboyant gay speech the phonemes s and l have a greater duration people are also more likely to identify those with higher frequency ranges as gayon the other hand there are many different styles represented within the gay community there is much linguistic variation in the gay community and each subculture appears to have its own distinct features according to podesva et al gay culture encompasses reified categories such as leather daddies clones drag queens circuit boys guppies gay yuppies gay prostitutes and activists'</li></ul> |
| 6 | <ul><li>'$\ldots\frac{(\vec{\xi} - \vec{\xi}')\,\Sigma(\vec{\xi}')}{\lvert\vec{\xi} - \vec{\xi}'\rvert^{2}}\,d^{2}\xi'$ as shown in the diagram on the right the difference between the unlensed angular position $\vec{\beta}$ and the observed position $\vec{\theta}$ is this deflection angle reduced by a ratio of distances described as the lens equation $\vec{\beta} = \vec{\theta} - \vec{\alpha}(\vec{\theta}) = \vec{\theta} - \frac{D_{ds}}{D_{s}}\hat{\vec{\alpha}}(D_{d}\vec{\theta})$ where $D_{ds}$ is the distance from the lens to the source $D_{s}$ is the distance from the observer to the source and $D_{d}$ is the distance from the observer to the lens for extragalactic lenses these must be angular diameter distances in strong gravitational lensing this equation can have multiple solutions because a single source at $\vec{\beta}$ can be lensed into multiple images the reduced deflection angle $\vec{\alpha}(\vec{\theta})$ can be written as $\vec{\alpha}(\vec{\theta}) = \frac{1}{\pi}\int d^{2}\theta'\,\frac{(\vec{\theta} - \vec{\theta}')\,\kappa(\vec{\theta}')}{\lvert\vec{\theta} - \vec{\theta}'\rvert^{2}}$ where we define the convergence $\kappa(\vec{\theta}) = \frac{\Sigma(\vec{\theta})}{\Sigma_{cr}}$ and the critical surface density not to be confused with the critical density of the universe $\Sigma_{cr} = \frac{c^{2}D_{s}}{4\pi G D_{ds}D_{d}}$ we can also define the deflection potential $\psi(\vec{\theta}) = \frac{1}{\pi}\int d^{2}\theta'\,\kappa(\vec{\theta}')\ln\lvert\vec{\theta} - \vec{\theta}'\rvert$ such that the scaled deflection angle is just the gradient of the potential and the convergence is half the laplacian of the potential $\vec{\theta} - \vec{\beta} = \vec{\alpha}(\vec{\theta}) = \vec{\nabla}\psi(\vec{\theta})$ and $\kappa(\vec{\theta}) = \frac{1}{2}\nabla^{2}\psi$'</li><li>'scattering cils or raman process also exists which is well studied and is in many ways completely analogous to cia and cie cils arises from interactioninduced polarizability increments of molecular complexes the excess polarizability of a complex relative the sum of polarizabilities of the noninteracting molecules molecules interact at close range through intermolecular forces the van der waals forces which cause minute shifts of the electron density distributions relative the distributions of electrons when the molecules are not interacting intermolecular forces are repulsive at near range where electron exchange forces dominate the interaction and attractive at somewhat greater separations where the dispersion forces are active if separations are further increased all intermolecular forces fall off rapidly and may be totally neglected repulsion and attraction are due respectively to the small defects or excesses of electron densities of molecular complexes in the space between the interacting molecules which often result in interactioninduced electric dipole moments that contribute some to interactioninduced emission and absorption intensities the resulting dipoles are referred to as exchange forceinduced dipole and dispersion forceinduced dipoles respectively other dipole induction mechanisms also exist in molecular as opposed to monatomic gases and in mixtures of gases when molecular gases are present molecules have centers of positive charge the nuclei which are surrounded by a cloud of electrons molecules thus may be thought of being surrounded by various electric multipolar fields which will polarize any collisional partner momentarily in a flyby encounter generating the socalled multipoleinduced dipoles in diatomic molecules such as h2 and n2 the lowestorder multipole moment is the quadrupole followed by a hexadecapole etc hence the quadrupoleinduced hexadecapoleinduced dipoles especially the former is often the strongest most significant of the induced dipoles contributing to cia and cie other induced dipole mechanisms exist in collisional systems involving molecules of three or more atoms co2 ch4 collisional frame distortion may be an important induction mechanism collisioninduced emission and absorption by simultaneous collisions of three or more particles generally do involve pairwiseadditive dipole components as well as important irreducible dipole contributions and their spectra collisioninduced absorption was first reported in compressed oxygen gas in 1949 by harry welsch and associates at frequencies of the fundamental band of the o2 molecule note that an unperturbed o2 molecule like all other diatomic homonuclear molecules'</li><li>'the firehose instability or hosepipe instability is a dynamical instability of thin or elongated galaxies the instability causes the galaxy to buckle or bend in a direction perpendicular to its long axis after the instability has run its course the galaxy is less elongated ie rounder than before any sufficiently thin stellar system in which some component of the internal velocity is in the form of random or counterstreaming motions as opposed to rotation is subject to the instability the firehose instability is probably responsible for the fact that elliptical galaxies and dark matter haloes never have axis ratios more extreme than about 31 since this is roughly the axis ratio at which the instability sets in it may also play a role in the formation of barred spiral galaxies by causing the bar to thicken in the direction perpendicular to the galaxy diskthe firehose instability derives its name from a similar instability in magnetized plasmas however from a dynamical point of view a better analogy is with the kelvin – helmholtz instability or with beads sliding along an oscillating string the firehose instability can be analyzed exactly in the case of an infinitely thin selfgravitating sheet of stars if the sheet experiences a small displacement $h(x,t)$ in the $z$ direction the vertical acceleration for stars of $x$ velocity $u$ as they move around the bend is $a_z = \left(\frac{\partial}{\partial t} + u\frac{\partial}{\partial x}\right)^{2} h = \frac{\partial^{2}h}{\partial t^{2}} + 2u\frac{\partial^{2}h}{\partial t\,\partial x} + u^{2}\frac{\partial^{2}h}{\partial x^{2}}$ provided the bend is small enough that the horizontal velocity is unaffected averaged over all stars at $x$ this acceleration must equal the gravitational restoring force per unit mass $F(x)$ in a frame chosen such that the mean streaming motions are zero this relation becomes $\frac{\partial^{2}h}{\partial t^{2}} + \sigma_{u}^{2}\frac{\partial^{2}h}{\partial x^{2}} - F_z(x,t) = 0$ where $\sigma_{u}$ is the horizontal velocity dispersion in that frame for a perturbation of the form $h(x,t) = H\exp[\mathrm{i}(kx - \omega t)]$ the gravitational restoring force is $F_z(x$'</li></ul> |
| 18 | <ul><li>'the american institute of graphic arts aiga is a professional organization for design its members practice all forms of communication design including graphic design typography interaction design user experience branding and identity the organizations aim is to be the standard bearer for professional ethics and practices for the design profession there are currently over 25000 members and 72 chapters and more than 200 student groups around the united states in 2005 aiga changed its name to “ aiga the professional association for design ” dropping the american institute of graphic arts to welcome all design disciplines aiga aims to further design disciplines as professions as well as cultural assets as a whole aiga offers opportunities in exchange for creative new ideas scholarly research critical analysis and education advancement in 1911 frederic goudy alfred stieglitz and w a dwiggins came together to discuss the creation of an organization that was committed to individuals passionate about communication design in 1913 president of the national arts club john g agar announced the formation of the american institute of graphic arts during the eighth annual exhibition of “ the books of the year ” the national arts club was instrumental in the formation of aiga in that they helped to form the committee to plan to organize the organization the committee formed included charles dekay and william b howland and officially formed the american institute of graphic arts in 1914 howland publisher and editor of the outlook was elected president the goal of the group was to promote excellence in the graphic design profession through its network of local chapters throughout the countryin 1920 aiga began awarding medals to individuals who have set standards of excellence over a lifetime of work or have made individual contributions to innovation within the practice of design winners have been recognized for design teaching writing or leadership of the profession and may honor individuals posthumouslyin 1982 the new york chapter was formed and the organization began creating local chapters to decentralize leadershiprepresented by washington dc arts advocate and attorney james lorin silverberg esq the washington dc chapter of aiga was organized as the american institute of graphic arts incorporated washington dc on september 6 1984 the aiga in collaboration with the us department of transportation produced 50 standard symbols to be used on signs in airports and other transportation hubs and at large international events the first 34 symbols were published in 1974 receiving a presidential design award the remaining 16 designs were added in 1979 in 2012 aiga replaced all its competitions with a single competition called cased formerly called justified the stated aim of the competition is to demonstrate the collective success and impact of the design profession by celebrating the best in contemporary design through case studies between 1941 and 2011 aiga sponsored a juried contest for the 50 best designed'</li><li>'a vignette in graphic design is a french loanword meaning a unique form for a frame to an image either illustration or photograph rather than the images edges being rectilinear it is overlaid with decorative artwork featuring a unique outline this is similar to the use of the word in photography where the edges of an image that has been vignetted are nonlinear or sometimes softened with a mask – often a darkroom process of introducing a screen an oval vignette is probably the most 
common example originally a vignette was a design of vineleaves and tendrils vignette small vine in french the term was also used for a small embellishment without border in what otherwise would have been a blank space such as that found on a titlepage a headpiece or tailpiece the use in modern graphic design is derived from book publishing techniques dating back to the middle ages analytical bibliography ca 1450 to 1800 when a vignette referred to an engraved design printed using a copperplate press on a page that has already been printed on using a letter press printing press vignettes are sometimes distinguished from other intext illustrations printed on a copperplate press by the fact that they do not have a border such designs usually appear on titlepages only woodcuts which are printed on a letterpress and are also used to separate sections or chapters are identified as a headpiece tailpiece or printers ornament depending on shape and position calligraphy another conjunction of text and decoration curlicues flourishes in the arts usually composed of concentric circles often used in calligraphy scrollwork general name for scrolling abstract decoration used in many areas of the visual arts'</li><li>'archibald winterbottom was a british cotton cloth merchant who is best known for becoming the largest producer of bookcloth and tracing cloth in the world bookcloth became the dominant bookbinding material in the early 19th century which was much cheaper and easier to work with than leather revolutionising the manufacture and distribution of books winterbottom was born in linthwaite in the heart of the west riding of yorkshire the son of a third generation wool cloth merchant william whitehead winterbottom 1771 – 1842 and isabella nee dickson 1784 – 1849 not long after the family moved to the civil parish of saddleworth where winterbottom at the age of 15 left home in search of his fortune he reportedly promised his father that when he obtained a position he would “ do his utmost to succeed ” in 1829 winterbottom is said to have walked the 12 miles to manchester presumably seeking an apprenticeship beginning his working life as a clerk with the largest cotton merchants in manchester henry bannerman sons he remained with bannermans for the next twentythree years where he learned how to refine cloth to the highest degree and developed different finishes that could be applied to plain cloth at the age of nineteen he was appointed to manage their bradford accounts and to run their silesia department patenting a silvery finish lining which became known as dacians winterbottom was made a partner at bannermans aged thirty which he held for the next nine years manchester was at the heart of the cotton industry in britain during the 19th century which was a labourintensive sector at a time when half of the workforce were children in 1845 winterbottom married helen woolley whose family came from a unitarian tradition at the same time he became actively involved in the lancashire public school association lpsa founded in 1847 which was dominated by unitarians by 1852 winterbottom formed part of a delegation of the national public school association npa to present a draft bill to lord john russell at 10 downing street for the establishment of nondenominational free schools in england and wales ” he remained active within the npa listed as secretary to the general committee on education in 1857 but by 1862 the npa had achieved some of what it had set out to achieve and was dissolved winterbottom went on 
to work with the newly formed manchester educational aid society campaigning for compulsory primary education he spent the rest of his life actively involved in improving child welfare creating new schools and changing legislation to protect children by 1851 winterbottom had a successful career working at henry bannerman sons living in a prosperous neighbourhood in the northwest of manchester he had been gaining experience in working the machinery needed to'</li></ul> |
| 14 | <ul><li>'general anesthesia were enough to anesthetise the fetus all fetuses would be born sleepy after a cesarean section performed in general anesthesia which is not the case dr carlo v bellieni also agrees that the anesthesia that women receive for fetal surgery is not sufficient to anesthetize the fetus in 1985 questions about fetal pain were raised during congressional hearings concerning the silent screamin 2013 during the 113th congress representative trent franks introduced a bill called the paincapable unborn child protection act hr 1797 it passed in the house on june 18 2013 and was received in the us senate read twice and referred to the judiciary committeein 2004 during the 108th congress senator sam brownback introduced a bill called the unborn child pain awareness act for the stated purpose of ensuring that women seeking an abortion are fully informed regarding the pain experienced by their unborn child which was read twice and referred to committee subsequently 25 states have examined similar legislation related to fetal pain andor fetal anesthesia and in 2010 nebraska banned abortions after 20 weeks on the basis of fetal pain eight states – arkansas georgia louisiana minnesota oklahoma alaska south dakota and texas – have passed laws which introduced information on fetal pain in their stateissued abortioncounseling literature which one opponent of these laws the guttmacher institute founded by planned parenthood has called generally irrelevant and not in line with the current medical literature arthur caplan director of the center for bioethics at the university of pennsylvania said laws such as these reduce the process of informed consent to the reading of a fixed script created and mandated by politicians not doctors pain in babies prenatal development texas senate bill 5'</li><li>'somitogenesis is the process by which somites form somites are bilaterally paired blocks of paraxial mesoderm that form along the anteriorposterior axis of the developing embryo in segmented animals in vertebrates somites give rise to skeletal muscle cartilage tendons endothelium and dermis in somitogenesis somites form from the paraxial mesoderm a particular region of mesoderm in the neurulating embryo this tissue undergoes convergent extension as the primitive streak regresses or as the embryo gastrulates the notochord extends from the base of the head to the tail with it extend thick bands of paraxial mesodermas the primitive streak continues to regress somites form from the paraxial mesoderm by budding off rostrally as somitomeres or whorls of paraxial mesoderm cells compact and separate into discrete bodies the periodic nature of these splitting events has led many to say to that somitogenesis occurs via a clockwavefront model in which waves of developmental signals cause the periodic formation of new somites these immature somites then are compacted into an outer layer the epithelium and an inner mass the mesenchyme the somites themselves are specified according to their location as the segmental paraxial mesoderm from which they form it itself determined by position along the anteriorposterior axis before somitogenesis the cells within each somite are specified based on their location within the somite in addition they retain the ability to become any kind of somitederived structure until relatively late in the process of somitogenesis once the cells of the presomitic mesoderm are in place following cell migration during gastrulation oscillatory expression of many genes begins in 
these cells as if regulated by a developmental clock as mentioned previously this has led many to conclude that somitogenesis is coordinated by a clock and wave mechanism in technical terms this means that somitogenesis occurs due to the largely cellautonomous oscillations of a network of genes and gene products which causes cells to oscillate between a permissive and a nonpermissive state in a consistently timedfashion like a clock these genes include members of the fgf family wnt and notch pathway as well as targets of these pathways the wavefront progress slowly in a posteriortoanterior direction as the wavefront'</li><li>'the myometrium once these cells penetrate through the first few layers of cells of the decidua they lose their ability to proliferate and become invasive this departure from the cell cycle seems to be due to factors such as tgfβ and decorin although these invasive interstitial cytotrophoblasts can no longer divide they retain their ability to form syncytia multinucleated giant cells small syncytia are found in the placental bed and myometrium as a result of the fusion of interstitial cytotrophoblastsinterstitial cytotrophoblasts may also transform into endovascular cytotrophoblasts the primary function of the endovascular cytotrophoblast is to penetrate maternal spiral arteries and route the blood flow through the placenta for the growing embryo to use they arise from interstitial cytotrophoblasts from the process of phenocopying this changes the phenotype of these cells from epithelial to endothelial endovascular cytotrophoblasts like their interstitial predecessor are nonproliferating and invasive proper cytotrophoblast function is essential in the implantation of a blastocyst after hatching the embryonic pole of the blastocyst faces the uterine endometrium once they make contact the trophoblast begins to rapidly proliferate the cytotrophoblast secretes proteolytic enzymes to break down the extracellular matrix between the endometrial cells to allow fingerlike projections of trophoblast to penetrate through projections of cytotrophoblast and syncytiotrophoblast pull the embryo into the endometrium until it is fully covered by endometrial epithelium save for the coagulation plug the most common associated disorder is preeclampsia affecting approximately 7 of all births it is characterized by a failure of the cytotrophoblast to invade the uterus and its vasculature specifically the spiral arteries that the endovascular cytotrophoblast should invade the result of this is decreased blood flow to the fetus which may cause intrauterine growth restriction clinical symptoms of preeclampsia in the mother are most commonly high blood pressure proteinuria and edema conversely if there is too much invasion of uterine tissue by the trophoblast then'</li></ul> |
| 11 | <ul><li>'the chest wall this is a noninvasive highly accurate and quick assessment of the overall function of the heart tte utilizes several windows to image the heart from different perspectives each window has advantages and disadvantages for viewing specific structures within the heart and typically numerous windows are utilized within the same study to fully assess the heart parasternal long and parasternal short axis windows are taken next to the sternum the apical twothreefour chamber windows are taken from the apex of the heart lower left side and the subcostal window is taken from underneath the edge of the last rib tte utilizes one m mode two and threedimensional ultrasound time is implicit and not included from the different windows these can be combined with pulse wave or continuous wave doppler to visualize the velocity of blood flow and structure movements images can be enhanced with contrast that are typically some sort of micro bubble suspension that reflect the ultrasound waves a transesophageal echocardiogram is an alternative way to perform an echocardiogram a specialized probe containing an ultrasound transducer at its tip is passed into the patients esophagus via the mouth allowing image and doppler evaluation from a location directly behind the heart it is most often used when transthoracic images are suboptimal and when a clearer and more precise image is needed for assessment this test is performed in the presence of a cardiologist anesthesiologist registered nurse and ultrasound technologist conscious sedation andor localized numbing medication may be used to make the patient more comfortable during the procedure tee unlike tte does not have discrete windows to view the heart the entire esophagus and stomach can be utilized and the probe advanced or removed along this dimension to alter the perspective on the heart most probes include the ability to deflect the tip of the probe in one or two dimensions to further refine the perspective of the heart additionally the ultrasound crystal is often a twodimension crystal and the ultrasound plane being used can be rotated electronically to permit an additional dimension to optimize views of the heart structures often movement in all of these dimensions is needed tee can be used as standalone procedures or incorporated into catheter or surgicalbased procedures for example during a valve replacement surgery the tee can be used to assess the valve function immediately before repairreplacement and immediately after this permits revising the valve midsurgery if needed to improve outcomes of the surgery a stress echocardiogram also known as a stress echo uses ultrasound imaging of the heart to'</li><li>'and arms within the cranium the two vertebral arteries fuse into the basilar artery posterior inferior cerebellar artery pica basilar artery supplies the midbrain cerebellum and usually branches into the posterior cerebral artery anterior inferior cerebellar artery aica pontine branches superior cerebellar artery sca posterior cerebral artery pca posterior communicating artery the venous drainage of the cerebrum can be separated into two subdivisions superficial and deep the superficial systemthe superficial system is composed of dural venous sinuses sinuses channels within the dura mater the dural sinuses are therefore located on the surface of the cerebrum the most prominent of these sinuses is the superior sagittal sinus which is located in the sagittal plane under the midline of the cerebral vault posteriorly and 
inferiorly to the confluence of sinuses where the superficial drainage joins with the sinus that primarily drains the deep venous system from here two transverse sinuses bifurcate and travel laterally and inferiorly in an sshaped curve that forms the sigmoid sinuses which go on to form the two jugular veins in the neck the jugular veins parallel the upward course of the carotid arteries and drain blood into the superior vena cava the veins puncture the relevant dural sinus piercing the arachnoid and dura mater as bridging veins that drain their contents into the sinus the deep venous systemthe deep venous system is primarily composed of traditional veins inside the deep structures of the brain which join behind the midbrain to form the great cerebral vein vein of galen this vein merges with the inferior sagittal sinus to form the straight sinus which then joins the superficial venous system mentioned above at the confluence of sinuses cerebral blood flow cbf is the blood supply to the brain in a given period of time in an adult cbf is typically 750 millilitres per minute or 15 of the cardiac output this equates to an average perfusion of 50 to 54 millilitres of blood per 100 grams of brain tissue per minute cbf is tightly regulated to meet the brains metabolic demands too much blood a clinical condition of a normal homeostatic response of hyperemia can raise intracranial pressure icp which can compress and damage delicate brain tissue too little blood flow ischemia results if blood flow to the brain is below 18 to 20 ml per 100 g per minute and tissue death occurs if flow dips below 8 to'</li><li>'##ie b infection it is mostly unnecessary for treatment purposes to diagnose which virus is causing the symptoms in question though it may be epidemiologically useful coxsackie b infections usually do not cause serious disease although for newborns in the first 1 – 2 weeks of life coxsackie b infections can easily be fatal the pancreas is a frequent target which can cause pancreatitiscoxsackie b3 cb3 infections are the most common enterovirus cause of myocarditis and sudden cardiac death cb3 infection causes ion channel pathology in the heart leading to ventricular arrhythmia studies in mice suggest that cb3 enters cells by means of tolllike receptor 4 both cb3 and cb4 exploit cellular autophagy to promote replication the b4 coxsackie viruses cb4 serotype was suggested to be a possible cause of diabetes mellitus type 1 t1d an autoimmune response to coxsackie virus b infection upon the islets of langerhans may be a cause of t1dother research implicates strains b1 a4 a2 and a16 in the destruction of beta cells with some suggestion that strains b3 and b6 may have protective effects via immunological crossprotection as of 2008 there is no wellaccepted treatment for the coxsackie b group of viruses palliative care is available however and patients with chest pain or stiffness of the neck should be examined for signs of cardiac or central nervous system involvement respectively some measure of prevention can usually be achieved by basic sanitation on the part of foodservice workers though the viruses are highly contagious care should be taken in washing ones hands and in cleaning the body after swimming in the event of coxsackieinduced myocarditis or pericarditis antiinflammatories can be given to reduce damage to the heart muscle enteroviruses are usually only capable of acute infections that are rapidly cleared by the adaptive immune response however mutations which enterovirus b serotypes such as 
coxsackievirus b and echovirus acquire in the host during the acute phase can transform these viruses into the noncytolytic form also known as noncytopathic or defective enterovirus this form is a mutated quasispecies of enterovirus which is capable of causing persistent infection in human tissues and such infections have been found in the pancreas in type 1 diabetes in chronic myocarditis and dilated cardiomyopathy in valvular'</li></ul> |
| 41 | <ul><li>'survey placename datathe ons has produced census results from urban areas since 1951 since 1981 based upon the extent of irreversible urban development indicated on ordnance survey maps the definition is an extent of at least 20 ha and at least 1500 census residents separate areas are linked if less than 200 m 220 yd apart included are transportation features the uk has five urban areas with a population over a million and a further sixty nine with a population over one hundred thousand australia the australian bureau of statistics refers to urban areas as urban centres which it generally defines as population clusters of 1000 or more people australia is one of the most urbanised countries in the world with more than 50 of the population residing in australias three biggest urban centres new zealand statistics new zealand defines urban areas in new zealand which are independent of any administrative subdivisions and have no legal basis there are four classes of urban area major urban areas population 100000 large urban areas population 30000 – 99999 medium urban areas population 10000 – 29999 and small urban areas population 1000 – 9999 as of 2021 there are 7 major urban areas 13 large urban areas 22 medium urban areas and 136 small urban areas urban areas are reclassified after each new zealand census so population changes between censuses does not change an urban areas classification canada according to statistics canada an urban area in canada is an area with a population of at least 1000 people where the density is no fewer than 400 persons per square kilometre 1000sq mi if two or more urban areas are within 2 km 12 mi of each other by road they are merged into a single urban area provided they do not cross census metropolitan area or census agglomeration boundariesin the canada 2011 census statistics canada redesignated urban areas with the new term population centre the new term was chosen in order to better reflect the fact that urban vs rural is not a strict division but rather a continuum within which several distinct settlement patterns may exist for example a community may fit a strictly statistical definition of an urban area but may not be commonly thought of as urban because it has a smaller population or functions socially and economically as a suburb of another urban area rather than as a selfcontained urban entity or is geographically remote from other urban communities accordingly the new definition set out three distinct types of population centres small population 1000 to 29999 medium population 30000 to 99999 and large population 100000 or greater despite the change in terminology however the demographic definition of a population centre remains unchanged from that of an urban area a population of at least 1000 people where the density is no fewer than 400 persons per km2 mexico mexico'</li><li>'neighbourhoods green is an english partnership initiative which works with social landlords and housing associations to highlight the importance of open and green space for residents and raise the overall quality of design and management with these groups the partnership was established in 2003 when peabody trust and notting hill housing group held a conference which identified the need to raise the profile of the green and open spaces owned and managed by social landlords the scheme attracted praise from the then minister for parks and green spaces yvette coopersince 2003 the partnership has expanded to include national housing federation groundwork the 
wildlife trusts landscape institute green flag award royal horticultural society natural england and cabe it is overseen by a steering group which includes representatives from circle housing group great places housing group helena homes london borough of hammersmith fulham medina housing new charter housing trust notting hill housing peabody trust places for people regenda group and wakefield district housing neighbourhoods green has three main areas of emphasis it produces best practice guidance highlighting the contribution parks gardens and play areas make to the quality of life for residents – including the mitigation of climate change promotion of biodiversity and aesthetic qualities it also generates a number of case studies from housing associations and community groups and offers training for landlords residents and partners on areas such as playspace green infrastructure and growing foodin 2011 working in conjunction with university of sheffield and the national housing federation neighbourhoods green produced greener neighbourhoods a best practice guide to managing green space for social housing its ten principles for housing green space were commit to quality involve residents know the bigger picture make the best use of funding design for local people develop training and skills maintain high standards make places feel safe promote healthy living prepare for climate changeduring 201314 neighbourhoods green will be working with keep britain tidy to support the expansion of the green flag award into the social housing sector'</li><li>'matrix planning methodology was set in place the ct method principles are the foundation of the design implementation and management of this metropolitan plan'</li></ul> |
| 22 | <ul><li>'time of concentration is a concept used in hydrology to measure the response of a watershed to a rain event it is defined as the time needed for water to flow from the most remote point in a watershed to the watershed outlet it is a function of the topography geology and land use within the watershed a number of methods can be used to calculate time of concentration including the kirpich 1940 and nrcs 1997 methods time of concentration is useful in predicting flow rates that would result from hypothetical storms which are based on statistically derived return periods through idf curves for many often economic reasons it is important for engineers and hydrologists to be able to accurately predict the response of a watershed to a given rain event this can be important for infrastructure development design of bridges culverts etc and management as well as to assess flood risk such as the arkstormscenario this image shows the basic principle which leads to determination of the time of concentration much like a topographic map showing lines of equal elevation a map with isolines can be constructed to show locations with the same travel time to the watershed outlet in this simplified example the watershed outlet is located at the bottom of the picture with a stream flowing through it moving up the map we can say that rainfall which lands on all of the places along the first yellow line will reach the watershed outlet at exactly the same time this is true for every yellow line with each line further away from the outlet corresponding to a greater travel time for runoff traveling to the outlet furthermore as this image shows the spatial representation of travel time can be transformed into a cumulative distribution plot detailing how travel times are distributed throughout the area of the watershed'</li><li>'equation d s t d t displaystyle dstdt describes how the soil saturation changes over time the terms on the right hand side describe the rates of rainfall r displaystyle r interception i displaystyle i runoff q displaystyle q evapotranspiration e displaystyle e and leakage l displaystyle l these are typically given in millimeters per day mmd runoff evaporation and leakage are all highly dependent on the soil saturation at a given time in order to solve the equation the rate of evapotranspiration as a function of soil moisture must be known the model generally used to describe it states that above a certain saturation evaporation will only be dependent on climate factors such as available sunlight once below this point soil moisture imposes controls on evapotranspiration and it decreases until the soil reaches the point where the vegetation can no longer extract any more water this soil level is generally referred to as the permanent wilting point use of this term can lead to confusion because many plant species do not actually wilt the damkohler number is a unitless ratio that predicts whether the duration in which a particular nutrient or solute is in specific pool or flux of water will be sufficient time for a specific reaction to occur d a f r a c t t r a n s p o r t t r e a c t i o n displaystyle dafracttransporttreaction where t is the time of either the transport or the reaction transport time can be substituted for t exposure to determine if a reaction can realistically occur depending on during how much of the transport time the reactant will be exposed to the correct conditions to react a damkohler number greater than 1 signifies that the reaction has time to react 
completely whereas the opposite is true for a damkohler number less than 1 darcys law is an equation that describes the flow of a fluid through a porous medium the law was formulated by henry darcy in the early 1800s when he was charged with the task to bring water through an aquifer to the town of dijon france henry conducted various experiments on the flow of water through beds of sand to derive the equation q − k a x f r a c h l displaystyle qkaxfrachl where q is discharge measured in m3sec k is hydraulic conductivity ms a is cross sectional area that the water travels m2 where h is change in height over the gradual distance of the aquifer m where l is the length of the aquifer or distance the water'</li><li>'##s power extended even to the high water mark and into the main streamsin the united states the high water mark is also significant because the united states constitution gives congress the authority to legislate for waterways and the high water mark is used to determine the geographic extent of that authority federal regulations 33 cfr 3283e define the ordinary high water mark ohwm as that line on the shore established by the fluctuations of water and indicated by physical characteristics such as a clear natural line impressed on the bank shelving changes in the character of soil destruction of terrestrial vegetation the presence of litter and debris or other appropriate means that consider the characteristics of the surrounding areas for the purposes of section 404 of the clean water act the ohwm defines the lateral limits of federal jurisdiction over nontidal water bodies in the absence of adjacent wetlands for the purposes of sections 9 and 10 of the rivers and harbors act of 1899 the ohwm defines the lateral limits of federal jurisdiction over traditional navigable waters of the us the ohwm is used by the united states army corps of engineers the united states environmental protection agency and other federal agencies to determine the geographical extent of their regulatory programs likewise many states use similar definitions of the ohwm for the purposes of their own regulatory programs in 2016 the court of appeals of indiana ruled that land below the ohwm as defined by common law along lake michigan is held by the state in trust for public use chart datum mean high water measuring storm surge terrace geology benches left by lakes wash margin'</li></ul> |
| 35 | <ul><li>'field would be elevated levels of bicarbonate hco−3 sodium and silica ions in the water runoff the breakdown of carbonate minerals caco 3 h 2 co 3 [UNK] − − [UNK] ca 2 2 hco 3 − displaystyle ce caco3 h2co3 ca2 2 hco3 caco 3 [UNK] − − [UNK] ca 2 co 3 2 − displaystyle ce caco3 ca2 co32 the further dissolution of carbonic acid h2co3 and bicarbonate hco−3 produces co2 gas oxidization is also a major contributor to the breakdown of many silicate minerals and formation of secondary minerals diagenesis in the early soil profile oxidation of olivine femgsio4 releases fe mg and si ions the mg is soluble in water and is carried in the runoff but the fe often reacts with oxygen to precipitate fe2o3 hematite the oxidized state of iron oxide sulfur a byproduct of decaying organic material will also react with iron to form pyrite fes2 in reducing environments pyrite dissolution leads to low ph levels due to elevated h ions and further precipitation of fe2o3 ultimately changing the redox conditions of the environment inputs from the biosphere may begin with lichen and other microorganisms that secrete oxalic acid these microorganisms associated with the lichen community or independently inhabiting rocks include a number of bluegreen algae green algae various fungi and numerous bacteria lichen has long been viewed as the pioneers of soil development as the following 1997 isozaki statement suggests the initial conversion of rock into soil is carried on by the pioneer lichens and their successors the mosses in which the hairlike rhizoids assume the role of roots in breaking down the surface into fine dust however lichens are not necessarily the only pioneering organisms nor the earliest form of soil formation as it has been documented that seedbearing plants may occupy an area and colonize quicker than lichen also eolian sedimentation wind generated can produce high rates of sediment accumulation nonetheless lichen can certainly withstand harsher conditions than most vascular plants and although they have slower colonization rates do form the dominant group in alpine regions organic acids released from plant roots include acetic acid and citric acid during the decay of organic matter phenolic acids are released from plant matter and humic acid and fulvic acid are released by soil microbes these organic acids speed up chemical weathering by combining with some of the weathering products in a process known'</li><li>'parent material is the underlying geological material generally bedrock or a superficial or drift deposit in which soil horizons form soils typically inherit a great deal of structure and minerals from their parent material and as such are often classified based upon their contents of consolidated or unconsolidated mineral material that has undergone some degree of physical or chemical weathering and the mode by which the materials were most recently transported parent materials that are predominantly composed of consolidated rock are termed residual parent material the consolidated rocks consist of igneous sedimentary and metamorphic rock etc soil developed in residual parent material is that which forms in consolidated geologic material this parent material is loosely arranged particles are not cemented together and not stratified this parent material is classified by its last means of transport for example material that was transported to a location by glacier then deposited elsewhere by streams is classified as streamtransported parent material or glacial fluvial parent 
material glacial till morrainal the material dragged with a moving ice sheet because it is not transported with liquid water the material is not sorted by size there are two kinds of glacial till basal till carried at the base of the glacier and laid underneath it this till is typically very compacted and does not allow for quick water infiltration ablation till carried on or in the glacier and is laid down as the glacier melts this till is typically less compacted than basal till glaciolacustrine parent material that is created from the sediments coming into lakes that come from glaciers the lakes are typically ice margin lakes or other types formed from glacial erosion or deposition the bedload of the rivers containing the larger rocks and stones is deposited near the lake edge while the suspended sediments are settle out all over the lake bed glaciofluvial consist of boulders gravel sand silt and clay from ice sheets or glaciers they are transported sorted and deposited by streams of water the deposits are formed beside below or downstream from the ice glaciomarine these sediments are created when sediments have been transported to the oceans by glaciers or icebergs they may contain large boulders transported by and dropped from icebergs in the midst of finegrained sediments within water transported parent material there are several important types alluvium parent material transported by streams of which there are three main types floodplains are the parts of river valleys that are covered with water during floods due to their seasonal nature floods create stratified layers in which larger particles tend to settle nearer the channel and smaller particles settle nearer the edges of the flooding area alluvial fans are sedimentary areas formed by narrow valley streams that suddenly drop to lowlands'</li><li>'uses the physics of ice formation to develop a layeredhybrid material specifically ceramic suspensions are directionally frozen under conditions designed to promote the formation of lamellar ice crystals which expel the ceramic particles as they grow after sublimation of the water this results in a layered homogeneous ceramic scaffold that architecturally is a negative replica of the ice the scaffold can then be filled with a second soft phase so as to create a hard – soft layered composite this strategy is also widely applied to build other kinds of bioinspired materials like extremely strong and tough hydrogels metalceramic and polymerceramic hybrid biomimetic materials with fine lamellar or brickandmortar architectures the brick layer is extremely strong but brittle and the soft mortar layer between the bricks generates limited deformation thereby allowing for the relief of locally high stresses while also providing ductility without too much loss in strength additive manufacturing encompasses a family of technologies that draw on computer designs to build structures layer by layer recently a lot of bioinspired materials with elegant hierarchical motifs have been built with features ranging in size from tens of micrometers to one submicrometer therefore the crack of materials only can happen and propagate on the microscopic scale which wouldnt lead to the fracture of the whole structure however the timeconsuming of manufacturing the hierarchical mechanical materials especially on the nano and microscale limited the further application of this technique in largescale manufacturing layerbylayer deposition is a technique that as suggested by its name consists of a layerbylayer assembly 
to make multilayered composites like nacre some examples of efforts in this direction include alternating layers of hard and soft components of tinpt with an ion beam system the composites made by this sequential deposition technique do not have a segmented layered microstructure thus sequential adsorption has been proposed to overcome this limitation and consists of repeatedly adsorbing electrolytes and rinsing the tablets which results in multilayers thin film deposition focuses on reproducing the crosslamellar microstructure of conch instead of mimicking the layered structure of nacre using microelectro mechanical systems mems among mollusk shells the conch shell has the highest degree of structural organization the mineral aragonite and organic matrix are replaced by polysilicon and photoresist the mems technology repeatedly deposits a thin silicon film the interfaces are etched by reactive ion etching and then filled with photoresist there are three films deposited consecutively although the mems technology is expensive and more timeconsum'</li></ul> |
| 1 | <ul><li>'aerodynamics is a branch of dynamics concerned with the study of the motion of air it is a subfield of fluid and gas dynamics and the term aerodynamics is often used when referring to fluid dynamics early records of fundamental aerodynamic concepts date back to the work of aristotle and archimedes in the 2nd and 3rd centuries bc but efforts to develop a quantitative theory of airflow did not begin until the 18th century in 1726 isaac newton became one of the first aerodynamicists in the modern sense when he developed a theory of air resistance which was later verified for low flow speeds air resistance experiments were performed by investigators throughout the 18th and 19th centuries aided by the construction of the first wind tunnel in 1871 in his 1738 publication hydrodynamica daniel bernoulli described a fundamental relationship between pressure velocity and density now termed bernoullis principle which provides one method of explaining lift aerodynamics work throughout the 19th century sought to achieve heavierthanair flight george cayley developed the concept of the modern fixedwing aircraft in 1799 and in doing so identified the four fundamental forces of flight lift thrust drag and weight the development of reasonable predictions of the thrust needed to power flight in conjunction with the development of highlift lowdrag airfoils paved the way for the first powered flight on december 17 1903 wilbur and orville wright flew the first successful powered aircraft the flight and the publicity it received led to more organized collaboration between aviators and aerodynamicists leading the way to modern aerodynamics theoretical advances in aerodynamics were made parallel to practical ones the relationship described by bernoulli was found to be valid only for incompressible inviscid flow in 1757 leonhard euler published the euler equations extending bernoullis principle to the compressible flow regime in the early 19th century the development of the navierstokes equations extended the euler equations to account for viscous effects during the time of the first flights several investigators developed independent theories connecting flow circulation to lift ludwig prandtl became one of the first people to investigate boundary layers during this time although the modern theory of aerodynamic science did not emerge until the 18th century its foundations began to emerge in ancient times the fundamental aerodynamics continuity assumption has its origins in aristotles treatise on the heavens although archimedes working in the 3rd century bc was the first person to formally assert that a fluid could be treated as a continuum archimedes also introduced the concept that fluid flow was driven by a pressure gradient within the fluid this idea would later prove fundamental to the understanding of fluid flow in 1687 newtons principia presented newtons laws'</li><li>'the yaw drive is an important component of the horizontal axis wind turbines yaw system to ensure the wind turbine is producing the maximal amount of electric energy at all times the yaw drive is used to keep the rotor facing into the wind as the wind direction changes this only applies for wind turbines with a horizontal axis rotor the wind turbine is said to have a yaw error if the rotor is not aligned to the wind a yaw error implies that a lower share of the energy in the wind will be running through the rotor area the generated energy will be approximately proportional to the cosine of the yaw error when the windmills of the 
18th century included the feature of rotor orientation via the rotation of the nacelle an actuation mechanism able to provide that turning moment was necessary initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power another historical innovation was the fantail this device was actually an auxiliary rotor equipped with plurality of blades and located downwind of the main rotor behind the nacelle in a 90° approximately orientation to the main rotor sweep plane in the event of change in wind direction the fantail would rotate thus transmitting its mechanical power through a gearbox and via a gearrimtopinion mesh to the tower of the windmill the effect of the aforementioned transmission was the rotation of the nacelle towards the direction of the wind where the fantail would not face the wind thus stop turning ie the nacelle would stop to its new positionthe modern yaw drives even though electronically controlled and equipped with large electric motors and planetary gearboxes have great similarities to the old windmill concept the main categories of yaw drives are the electric yaw drives commonly used in almost all modern turbines the hydraulic yaw drive hardly ever used anymore on modern wind turbines the gearbox of the yaw drive is a very crucial component since it is required to handle very large moments while requiring the minimal amount of maintenance and perform reliably for the whole lifespan of the wind turbine approx 20 years most of the yaw drive gearboxes have input to output ratios in the range of 20001 in order to produce the enormous turning moments required for the rotation of the wind turbine nacelle the gearrim and the pinions of the yaw drives are the components that finally transmit the turning moment from the yaw drives to the tower in order to turn the nacelle of the wind turbine around the tower axis z axis the main characteristics of the gearrim are its'</li><li>'the development of aerodynamics such as theodore von karman and max munk compressibility is an important factor in aerodynamics at low speeds the compressibility of air is not significant in relation to aircraft design but as the airflow nears and exceeds the speed of sound a host of new aerodynamic effects become important in the design of aircraft these effects often several of them at a time made it very difficult for world war ii era aircraft to reach speeds much beyond 800 kmh 500 mph some of the minor effects include changes to the airflow that lead to problems in control for instance the p38 lightning with its thick highlift wing had a particular problem in highspeed dives that led to a nosedown condition pilots would enter dives and then find that they could no longer control the plane which continued to nose over until it crashed the problem was remedied by adding a dive flap beneath the wing which altered the center of pressure distribution so that the wing would not lose its lifta similar problem affected some models of the supermarine spitfire at high speeds the ailerons could apply more torque than the spitfires thin wings could handle and the entire wing would twist in the opposite direction this meant that the plane would roll in the direction opposite to that which the pilot intended and led to a number of accidents earlier models werent fast enough for this to be a problem and so it wasnt noticed until later model spitfires like the mkix started to appear this was mitigated by adding 
considerable torsional rigidity to the wings and was wholly cured when the mkxiv was introduced the messerschmitt bf 109 and mitsubishi zero had the exact opposite problem in which the controls became ineffective at higher speeds the pilot simply couldnt move the controls because there was too much airflow over the control surfaces the planes would become difficult to maneuver and at high enough speeds aircraft without this problem could outturn them these problems were eventually solved as jet aircraft reached transonic and supersonic speeds german scientists in wwii experimented with swept wings their research was applied on the mig15 and f86 sabre and bombers such as the b47 stratojet used swept wings which delay the onset of shock waves and reduce drag in order to maintain control near and above the speed of sound it is often necessary to use either poweroperated allflying tailplanes stabilators or delta wings fitted with poweroperated elevons power operation prevents aerodynamic forces overriding the pilots control inputs finally another common problem that fits into this category is flutter at some speeds the airflow over the control'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6763 |
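The score above is reported on the held-out test split named in this card's metadata. A minimal sketch of how a comparable accuracy could be computed with the SetFit `Trainer` (the tiny evaluation set below is a placeholder, since the actual test data is not published with this card):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# Placeholder evaluation split -- substitute your own labeled data.
# Label 1 corresponds to the aerodynamics class in the label table above.
eval_dataset = Dataset.from_dict({
    "text": ["aerodynamics is a branch of dynamics concerned with the motion of air"],
    "label": [1],
})

model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-scon-poc")
trainer = Trainer(model=model, eval_dataset=eval_dataset, metric="accuracy")
print(trainer.evaluate())  # returns a dict such as {"accuracy": ...}
```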
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-scon-poc")
# Run inference
preds = model("no solutions to x n y n z n displaystyle xnynzn for all n ≥ 3 displaystyle ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus euler the interest of leonhard euler 1707 – 1783 in number theory was first spurred in 1729 when a friend of his the amateur goldbach pointed him towards some of fermats work on the subject this has been called the rebirth of modern number theory after fermats relative lack of success in getting his contemporaries attention for the subject eulers work on number theory includes the following proofs for fermats statements this includes fermats little theorem generalised by euler to nonprime moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p ≡ 1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer is the sum of four squares the first complete proof is by josephlouis lagrange 1770 soon improved by euler himself the lack of nonzero integer solutions to x 4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the case n3 of which euler also proved by a related method pells equation first misnamed by euler he wrote on the link between continued fractions and pells equation first steps towards analytic number theory in his work of sums of four squares partitions pentagonal numbers and the distribution of prime numbers euler pioneered the use of what can be seen as analysis in particular infinite series in number theory since he lived before the development of complex analysis most of his work is restricted to the formal manipulation of power series he did however do some very notable though not fully rigorous early work on what would later be called the riemann zeta function quadratic forms following fermats lead euler did further research on the question of which primes can be expressed in the form x 2 n y 2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine equations euler worked on some diophantine equations of genus 0 and 1 in particular he studied diophantuss work he tried to systematise it but the time was not yet ripe for such an endeavour — algebraic geometry was still in its infancy he did notice there was a connection between diophantine problems and elliptic integrals whose study he had himself initiated lagrange legendre and gauss josephlouis")
```
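Continuing from the snippet above, `model(...)` also accepts a list of texts; predictions are the integer class ids (0-42) listed under Training Set Metrics below. The input strings here are placeholders:

```python
# Reuses the `model` loaded above; inputs are placeholder documents.
batch_preds = model([
    "first placeholder document about hydrology",
    "second placeholder document about aerodynamics",
])
print(batch_preds)  # one class id in [0, 42] per input text
```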
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 2 | 375.0186 | 509 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 10 |
| 2 | 10 |
| 3 | 10 |
| 4 | 10 |
| 5 | 10 |
| 6 | 10 |
| 7 | 10 |
| 8 | 10 |
| 9 | 10 |
| 10 | 10 |
| 11 | 10 |
| 12 | 10 |
| 13 | 10 |
| 14 | 10 |
| 15 | 10 |
| 16 | 10 |
| 17 | 10 |
| 18 | 10 |
| 19 | 10 |
| 20 | 10 |
| 21 | 10 |
| 22 | 10 |
| 23 | 10 |
| 24 | 10 |
| 25 | 10 |
| 26 | 10 |
| 27 | 10 |
| 28 | 10 |
| 29 | 10 |
| 30 | 10 |
| 31 | 10 |
| 32 | 10 |
| 33 | 10 |
| 34 | 10 |
| 35 | 10 |
| 36 | 10 |
| 37 | 10 |
| 38 | 10 |
| 39 | 10 |
| 40 | 10 |
| 41 | 10 |
| 42 | 10 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (4, 8)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 0.01)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 512
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
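The values above map directly onto SetFit's `TrainingArguments`. A hedged reconstruction of the training call, assuming the SetFit 1.0 `Trainer`/`TrainingArguments` API listed under Framework Versions; the toy datasets are placeholders, since the real training data (43 classes, 10 examples each) is not published with this card:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy stand-ins for the unpublished training data: the real set has
# 43 classes with 10 examples per class (see the table above).
train_dataset = Dataset.from_dict({
    "text": ["placeholder urban planning text", "placeholder aerodynamics text"],
    "label": [41, 1],
})
eval_dataset = train_dataset

# use_differentiable_head gives the SetFitHead named in this card's
# model description; out_features matches the 43 classes.
model = SetFitModel.from_pretrained(
    "sentence-transformers/multi-qa-mpnet-base-cos-v1",
    use_differentiable_head=True,
    head_params={"out_features": 43},
)

args = TrainingArguments(
    batch_size=(16, 16),               # (embedding phase, classifier phase)
    num_epochs=(4, 8),
    body_learning_rate=(2e-05, 0.01),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    num_iterations=20,
    warmup_proportion=0.1,
    max_length=512,
    seed=42,
    load_best_model_at_end=True,       # CosineSimilarityLoss is the SetFit default
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```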
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0009 | 1 | 0.2745 | - |
| 0.9302 | 1000 | 0.0015 | - |
| 1.8605 | 2000 | 0.0015 | - |
| 2.7907 | 3000 | 0.0007 | - |
| 3.7209 | 4000 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/multi-qa-mpnet-base-cos-v1", "widget": [{"text": "authority to select projects and mandated new metropolitan planning initiatives for the first time state transportation officials were required to consult seriously with local representatives on mpo governing boards regarding matters of project prioritization and decisionmaking these changes had their roots in the need to address increasingly difficult transportation problems \u2014 in particular the more complicated patterns of traffic congestion that arose with the suburban development boom in the previous decades many recognized that the problems could only be addressed effectively through a stronger federal commitment to regional planning the legislation that emerged the intermodal surface transportation efficiency act istea was signed into federal law by president george h w bush in december 1991 it focused on improving transportation not as an end in itself but as the means to achieve important national goals including economic progress cleaner air energy conservation and social equity istea promoted a transportation system in which different modes and facilities \u2014 highway transit pedestrian bicycle aviation and marine \u2014 were integrated to allow a seamless movement of both goods and people new funding programs provided greater flexibility in the use of funds particularly regarding using previously restricted highway funds for transit development improved intermodal connections and emphasized upgrades to existing facilities over building new capacity \u2014 particularly roadway capacity to accomplish more serious metropolitan planning istea doubled federal funding for mpo operations and required the agencies to evaluate a variety of multimodal solutions to roadway congestion and other transportation problems mpos also were required to broaden public participation in the planning process and to see that investment decisions contributed to meeting the air quality standards of the clean air act amendments in addition istea placed a new requirement on mpos to conduct fiscally constrained planning and ensure that longrange transportation plans and shortterm transportation improvement programs were fiscally constrained in other words adopted plans and programs can not include more projects than reasonably can be expected to be funded through existing or projected sources of revenues this new requirement represented a major conceptual shift for many mpos and others in the planning community since the imposition of fiscal discipline on plans now required not only understanding how much money might be available but how to prioritize investment needs and make difficult choices among competing needs adding to this complexity is the need to plan across transportation modes and develop approaches for multimodal investment prioritization and decision making it is in this context of greater prominence funding and requirements that mpos function today an annual element is composed of transportation improvement projects contained in an areas transportation improvement program tip which is proposed for implementation during the current year the annual element is submitted to the us department of transportation as part of the required planning process the passage of safe accountable flexible efficient transportation equity act a legacy for users 
safetealu"}, {"text": "##pignygiroux served as an assistant professor from 1997 2003 associate professor from 2003 2014 chair of the department of geography from 2015 2018 and professor beginning in 2014 with secondary appointments in department of geology the college of education social services and rubenstein school of environment natural resources she teaches courses in meteorology climatology physical geography remote sensing and landsurface processes in her work as state climatologist for vermont dupignygiroux uses her expertise hydrology and extreme weather such as floods droughts and storms to keep the residents of vermont informed on how climate change will affect their homes health and livelihoods she assists other state agencies in preparing for and adapting to current and future impacts of climate change on vermonts transportation system emergency management planning and agriculture and forestry industries for example she has published analyses of the impacts of climate change on the health of vermonts sugar maples a hardwood species of key economic and cultural importance to the state as cochair of vermonts state \u2019 s drought task force she played a key role in developing the 2018 vermont state hazard mitigation plandupignygiroux served as secretary for the american association of state climatologists from 20102011 and president elect from 20192020 in june 2020 she was elected as president of the american association of state climatologists which is a twoyear term in addition to her research on climate change dupignygiroux is known for her efforts to research and promote climate literacy climate literacy is an understanding of the influences of and influences on the climate system including how people change the climate how climate metrics are observed and modelled and how climate change affects society \u201c being climate literate is more critical than ever before \u201d lesleyann dupignygiroux stated for a 2020 article on climate literacy \u201c if we do not understand weather climate and climate change as intricate and interconnected systems then our appreciation of the big picture is lost \u201d dupignygiroux is known for her climate literacy work with elementary and high school teachers and students she cofounded the satellites weather and climate swac project in 2008 which is a professional development program for k12 teachers designed to promote climate literacy and interest in the stem science technology engineering and mathematics careers dupignygiroux is also a founding member of the climate literacy and energy awareness network clean formerly climate literacy network a communitybased effort to support climate literacy and communication in a 2016 interview dupignygiroux stated \u201c sharing knowledge and giving back to my community are my two axioms in life watching students mature and flourish in"}, {"text": "no solutions to x n y n z n displaystyle xnynzn for all n \u2265 3 displaystyle ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus euler the interest of leonhard euler 1707 \u2013 1783 in number theory was first spurred in 1729 when a friend of his the amateur goldbach pointed him towards some of fermats work on the subject this has been called the rebirth of modern number theory after fermats relative lack of success in getting his contemporaries attention for the subject eulers work on number theory includes the following proofs for fermats statements this includes fermats little theorem generalised by euler to nonprime 
moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p \u2261 1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer is the sum of four squares the first complete proof is by josephlouis lagrange 1770 soon improved by euler himself the lack of nonzero integer solutions to x 4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the case n3 of which euler also proved by a related method pells equation first misnamed by euler he wrote on the link between continued fractions and pells equation first steps towards analytic number theory in his work of sums of four squares partitions pentagonal numbers and the distribution of prime numbers euler pioneered the use of what can be seen as analysis in particular infinite series in number theory since he lived before the development of complex analysis most of his work is restricted to the formal manipulation of power series he did however do some very notable though not fully rigorous early work on what would later be called the riemann zeta function quadratic forms following fermats lead euler did further research on the question of which primes can be expressed in the form x 2 n y 2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine equations euler worked on some diophantine equations of genus 0 and 1 in particular he studied diophantuss work he tried to systematise it but the time was not yet ripe for such an endeavour \u2014 algebraic geometry was still in its infancy he did notice there was a connection between diophantine problems and elliptic integrals whose study he had himself initiated lagrange legendre and gauss josephlouis"}, {"text": "sediment profile imagery spi is an underwater technique for photographing the interface between the seabed and the overlying water the technique is used to measure or estimate biological chemical and physical processes occurring in the first few centimetres of sediment pore water and the important benthic boundary layer of water timelapse imaging tspi is used to examine biological activity over natural cycles like tides and daylight or anthropogenic variables like feeding loads in aquaculture spi systems cost between tens and hundreds of thousands of dollars and weigh between 20 and 400 kilograms traditional spi units can be effectively used to explore continental shelf and abyssal depths recently developed spiscan or rspi rotational spi systems can now also be used to inexpensively investigate shallow 50m freshwater estuarine and marine systems humans are strongly visually oriented we like information in the form of pictures and are able to integrate many different kinds of data when they are presented in one or more images it seems natural to seek a way of directly imaging the sedimentwater interface in order to investigate animalsediment interactions in the marine benthos rhoads and cande 1971 took pictures of the sedimentwater interface at high resolution submillimetre over small spatial scales centimetres in order to examine benthic patterns through time or over large spatial scales kilometres rapidly slicing into seabeds and taking pictures instead of physical cores they analysed images of the vertical sediment profile in a technique that came to be known as spi this technique advanced in subsequent decades through a number of mechanical improvements and digital imaging and analysis technology spi is now a wellestablished approach accepted as standard practice in several parts of the world though its wider adoption 
has been hampered partly because of equipment cost deployment and interpretation difficulties it has also suffered some paradigm setbacks the amount of information that a person can extract from imagery in general is not easily and repeatedly reduced to quantifiable and interpretable values but see pech et al 2004 tkachenko 2005 sulston and ferry 2002 wrote about this difficulty in relation to the study of the human genome electron microscope images of their model organism caenorhabditis elegans carried a lot of information but were ignored by many scientists because they were not readily quantified yet that pictorial information ultimately resulted in a deep and quantifiable understanding of underlying principles and mechanisms in the same way spi has been used successfully by focusing on the integration of visual data and a few objectively quantifiable parameters in site reconnaissance and monitoring conventional diving is limited to shallow waters remotely sampling deeper sediments of high water content is often unreliable due"}, {"text": "1942 it now had a usable range of approximately 40 km conical scan was used for fine accuracy the iff antenna was now fitted in the center of the dish rather than on the sides better instruments were fitted and generally it was the best of the small wurzburgfumg 65 wurzburg riesegiant the electronics of the d model wurzburg combined with a 7meter dish to improve resolution and range range approx 70 km version e was a modified unit to fit on railroad flatcars to produce a mobile flak radar system version g had the 24meter antenna and electronics from a freya installed the antenna dipoles were inside the reflector the reason for this was that the allies were flying very high recon flights which were above the maximum height of the freya the standard wurzburg rieses 50 cm beam was too narrow to find them directly by combining the two systems the freya could set the wurzburg riese onto the target fumg 63 mainz the mainz introduced in 1941 was a development from the wurzburg with its 3meter solid metal reflector mounted on top of the same type of control car as used by the \u2018 kurmark \u2019 its range was 25 \u2013 35 km with an accuracy of \u00b110 \u2013 20 meters azimuth 01 degrees and elevation \u00b10305 degrees only 51 units were produced before being superseded by the \u2018 mannheim \u2019 fumg 64 mannheim the mannheim was an advanced development from the \u2018 mainz \u2019 it also had a 3meter reflector which was now made from a lattice framework covered in a fine mesh this was fixed to the front of a control cabin and the whole apparatus was rotated electrically its range was 25 \u2013 35 km with an accuracy of \u00b110 \u2013 15 meters azimuth and elevation accuracy of \u00b1015 degrees though accurate enough to control flak guns it was not deployed in large numbers this was due to its cost time and materials to manufacture was about three times that of a wurzburg d fumg 75 mannheim riese just as the wurzburgs performance was greatly improved when fitted with a 7meter reflector so was the mannheims and the result called a mannheim riese giant mannheim there was an optical device for the initial visual acquisition of the target with its narrow beam it was relatively immune from \u2018 window \u2019 its accuracy and automatic tracking enabled it to be used in antiaircraft missile research to track and control the missiles in flight only a handful were manufactured fumg 68 ansbach there was a need for a mobile radar with the range and accuracy 
of the \u2018 mannheim \u2019 the result in 1944 was the ansbach it"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.6763469621704241, "name": "Accuracy"}]}]}]} | udrearobert999/multi-qa-mpnet-base-cos-v1-scon-poc | null | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"model-index",
"region:us"
] | null | 2024-05-03T18:32:01+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/multi-qa-mpnet-base-cos-v1 #model-index #region-us
| SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
============================================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/multi-qa-mpnet-base-cos-v1 as the Sentence Transformer embedding model. A SetFitHead instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: sentence-transformers/multi-qa-mpnet-base-cos-v1
* Classification head: a SetFitHead instance
* Maximum Sequence Length: 512 tokens
* Number of Classes: 43 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 16)
* num\_epochs: (4, 8)
* max\_steps: -1
* sampling\_strategy: oversampling
* num\_iterations: 20
* body\_learning\_rate: (2e-05, 0.01)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* max\_length: 512
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: True
### Training Results
### Framework Versions
* Python: 3.10.12
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* Transformers: 4.40.1
* PyTorch: 2.2.1+cu121
* Datasets: 2.19.0
* Tokenizers: 0.19.1
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/multi-qa-mpnet-base-cos-v1\n* Classification head: a SetFitHead instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 43 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (4, 8)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 0.01)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* max\\_length: 512\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/multi-qa-mpnet-base-cos-v1 #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/multi-qa-mpnet-base-cos-v1\n* Classification head: a SetFitHead instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 43 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (4, 8)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 0.01)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* max\\_length: 512\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] | [
72,
62,
42,
16,
10,
43,
7,
184,
5,
75,
6
] | [
"TAGS\n#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/multi-qa-mpnet-base-cos-v1 #model-index #region-us \n### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/multi-qa-mpnet-base-cos-v1\n* Classification head: a SetFitHead instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 43 classes### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts### Model Labels\n\n\n\nEvaluation\n----------### Metrics\n\n\n\nUses\n----### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------### Training Set Metrics### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (4, 8)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 0.01)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* max\\_length: 512\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True### Training Results### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1### BibTeX"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
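Since the card leaves this section blank, the following is purely an illustrative sketch. The repository name suggests LoRA adapters trained on `meta-llama/Llama-2-7b-hf`; both the base model and the adapter pairing are assumptions inferred from that name, not anything this card documents:

```python
# Hypothetical loading sketch -- base model and adapter pairing are
# assumptions inferred from the repository name, not stated by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_2_5e-05"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapters
```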
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_2_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:32:08+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
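Since the card's own snippet is still marked as missing, the following is only a minimal sketch of how a checkpoint carrying these tags (`llama`, `text-generation`, `conversational`) is typically loaded with the standard transformers API. The repo id comes from this card's metadata; the chat template, device placement, and generation settings are all assumptions about the checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golf2248/rkys4yc"  # repo id taken from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "conversational" tag suggests a chat template may be defined;
# fall back to plain tokenizer(prompt) calls if it is not.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```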
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
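The comment above enumerates the possible precision regimes. Purely as an illustration (the card does not state which regime was actually used), each regime maps to a single `TrainingArguments` flag in transformers:

```python
from transformers import TrainingArguments

# Illustrative only — this model's actual regime is unspecified.
args_bf16 = TrainingArguments(output_dir="out", bf16=True)  # bf16 mixed precision
args_fp16 = TrainingArguments(output_dir="out", fp16=True)  # fp16 mixed precision
# fp32 is the default when neither flag is set.
```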
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/rkys4yc | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:33:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5029
- F1 Score: 0.7762
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
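A hedged sketch of that configuration, assuming the standard `Trainer` API was used; `output_dir` is a placeholder, and anything not in the list above is an assumption about the actual training script.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; max_steps drives the
# linear schedule across the 10,000 optimizer steps.
args = TrainingArguments(
    output_dir="out",                  # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```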
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5925 | 0.92 | 200 | 0.5552 | 0.7195 | 0.7236 |
| 0.5436 | 1.83 | 400 | 0.5341 | 0.7429 | 0.7443 |
| 0.531 | 2.75 | 600 | 0.5226 | 0.7509 | 0.7517 |
| 0.5247 | 3.67 | 800 | 0.5123 | 0.7600 | 0.7606 |
| 0.5093 | 4.59 | 1000 | 0.5074 | 0.7623 | 0.7643 |
| 0.4985 | 5.5 | 1200 | 0.5067 | 0.7612 | 0.7640 |
| 0.4985 | 6.42 | 1400 | 0.5035 | 0.7688 | 0.7704 |
| 0.4946 | 7.34 | 1600 | 0.5084 | 0.7594 | 0.7623 |
| 0.4906 | 8.26 | 1800 | 0.5094 | 0.7611 | 0.7640 |
| 0.4925 | 9.17 | 2000 | 0.5061 | 0.7593 | 0.7629 |
| 0.4885 | 10.09 | 2200 | 0.5141 | 0.7530 | 0.7580 |
| 0.4865 | 11.01 | 2400 | 0.5061 | 0.7623 | 0.7658 |
| 0.488 | 11.93 | 2600 | 0.5001 | 0.7615 | 0.7646 |
| 0.4868 | 12.84 | 2800 | 0.5031 | 0.7613 | 0.7652 |
| 0.4834 | 13.76 | 3000 | 0.5101 | 0.7601 | 0.7643 |
| 0.4816 | 14.68 | 3200 | 0.4980 | 0.7667 | 0.7689 |
| 0.4851 | 15.6 | 3400 | 0.5030 | 0.7601 | 0.7640 |
| 0.4807 | 16.51 | 3600 | 0.5046 | 0.7621 | 0.7649 |
| 0.4798 | 17.43 | 3800 | 0.4932 | 0.7682 | 0.7704 |
| 0.4798 | 18.35 | 4000 | 0.5010 | 0.7638 | 0.7672 |
| 0.4769 | 19.27 | 4200 | 0.5125 | 0.7608 | 0.7646 |
| 0.4782 | 20.18 | 4400 | 0.5100 | 0.7577 | 0.7623 |
| 0.4746 | 21.1 | 4600 | 0.5057 | 0.7631 | 0.7663 |
| 0.4758 | 22.02 | 4800 | 0.5102 | 0.7596 | 0.7632 |
| 0.4757 | 22.94 | 5000 | 0.5010 | 0.7621 | 0.7655 |
| 0.476 | 23.85 | 5200 | 0.5025 | 0.7639 | 0.7672 |
| 0.472 | 24.77 | 5400 | 0.5186 | 0.7555 | 0.7603 |
| 0.4749 | 25.69 | 5600 | 0.5062 | 0.7604 | 0.7640 |
| 0.4726 | 26.61 | 5800 | 0.5013 | 0.7665 | 0.7695 |
| 0.4733 | 27.52 | 6000 | 0.5136 | 0.7582 | 0.7626 |
| 0.4714 | 28.44 | 6200 | 0.5021 | 0.7652 | 0.7683 |
| 0.471 | 29.36 | 6400 | 0.5076 | 0.7573 | 0.7618 |
| 0.4708 | 30.28 | 6600 | 0.5061 | 0.7585 | 0.7623 |
| 0.4729 | 31.19 | 6800 | 0.5006 | 0.7652 | 0.7681 |
| 0.4721 | 32.11 | 7000 | 0.5077 | 0.7572 | 0.7615 |
| 0.4695 | 33.03 | 7200 | 0.5081 | 0.7597 | 0.7635 |
| 0.4683 | 33.94 | 7400 | 0.5081 | 0.7629 | 0.7663 |
| 0.4694 | 34.86 | 7600 | 0.5069 | 0.7617 | 0.7652 |
| 0.4689 | 35.78 | 7800 | 0.5048 | 0.7610 | 0.7646 |
| 0.4702 | 36.7 | 8000 | 0.5035 | 0.7623 | 0.7658 |
| 0.469 | 37.61 | 8200 | 0.5106 | 0.7566 | 0.7612 |
| 0.4698 | 38.53 | 8400 | 0.5019 | 0.7646 | 0.7678 |
| 0.4686 | 39.45 | 8600 | 0.5063 | 0.7604 | 0.7640 |
| 0.4653 | 40.37 | 8800 | 0.5104 | 0.7592 | 0.7632 |
| 0.4712 | 41.28 | 9000 | 0.5051 | 0.7587 | 0.7626 |
| 0.4652 | 42.2 | 9200 | 0.5050 | 0.7633 | 0.7666 |
| 0.4655 | 43.12 | 9400 | 0.5056 | 0.7623 | 0.7658 |
| 0.4721 | 44.04 | 9600 | 0.5049 | 0.7606 | 0.7643 |
| 0.4669 | 44.95 | 9800 | 0.5068 | 0.7611 | 0.7649 |
| 0.4667 | 45.87 | 10000 | 0.5062 | 0.7608 | 0.7646 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:33:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_4096\_512\_15M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5029
* F1 Score: 0.7762
* Accuracy: 0.7778
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4808
- F1 Score: 0.7894
- Accuracy: 0.7916
## Model description
More information needed
## Intended uses & limitations
More information needed
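The card leaves usage unspecified. As a stopgap, here is a hedged sketch of attaching this adapter to the base model named above with `peft`; treating the GUE H3K36me3 task as binary sequence classification, and the tokenizer/`trust_remote_code` details, are assumptions about the checkpoint, not statements from the card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"  # base model named in this card
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_4096_512_15M-L8_f"

# Assumption: binary classification head for the histone-mark task.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```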
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.575 | 0.92 | 200 | 0.5422 | 0.7341 | 0.7374 |
| 0.5247 | 1.83 | 400 | 0.5117 | 0.7605 | 0.7623 |
| 0.5026 | 2.75 | 600 | 0.5049 | 0.7680 | 0.7698 |
| 0.5026 | 3.67 | 800 | 0.5005 | 0.7662 | 0.7681 |
| 0.4916 | 4.59 | 1000 | 0.4974 | 0.7679 | 0.7701 |
| 0.4806 | 5.5 | 1200 | 0.4994 | 0.7659 | 0.7695 |
| 0.4827 | 6.42 | 1400 | 0.4932 | 0.7724 | 0.7738 |
| 0.478 | 7.34 | 1600 | 0.5105 | 0.7572 | 0.7615 |
| 0.4738 | 8.26 | 1800 | 0.4954 | 0.7707 | 0.7726 |
| 0.4763 | 9.17 | 2000 | 0.4943 | 0.7661 | 0.7692 |
| 0.4718 | 10.09 | 2200 | 0.5155 | 0.7478 | 0.7546 |
| 0.4674 | 11.01 | 2400 | 0.4903 | 0.7698 | 0.7729 |
| 0.4687 | 11.93 | 2600 | 0.4832 | 0.7780 | 0.7795 |
| 0.4665 | 12.84 | 2800 | 0.4904 | 0.7659 | 0.7692 |
| 0.4624 | 13.76 | 3000 | 0.5041 | 0.7613 | 0.7666 |
| 0.4594 | 14.68 | 3200 | 0.4892 | 0.7742 | 0.7767 |
| 0.4622 | 15.6 | 3400 | 0.4821 | 0.7781 | 0.7804 |
| 0.4586 | 16.51 | 3600 | 0.5021 | 0.7640 | 0.7675 |
| 0.4557 | 17.43 | 3800 | 0.4893 | 0.7711 | 0.7738 |
| 0.4556 | 18.35 | 4000 | 0.4861 | 0.7774 | 0.7798 |
| 0.4549 | 19.27 | 4200 | 0.4998 | 0.7716 | 0.7747 |
| 0.4517 | 20.18 | 4400 | 0.5009 | 0.7677 | 0.7715 |
| 0.4485 | 21.1 | 4600 | 0.4963 | 0.7768 | 0.7792 |
| 0.4506 | 22.02 | 4800 | 0.4967 | 0.7742 | 0.7769 |
| 0.4493 | 22.94 | 5000 | 0.4865 | 0.7758 | 0.7781 |
| 0.4481 | 23.85 | 5200 | 0.4881 | 0.7735 | 0.7761 |
| 0.4433 | 24.77 | 5400 | 0.5136 | 0.7576 | 0.7629 |
| 0.4455 | 25.69 | 5600 | 0.4978 | 0.7679 | 0.7715 |
| 0.4458 | 26.61 | 5800 | 0.4869 | 0.7801 | 0.7821 |
| 0.4456 | 27.52 | 6000 | 0.5017 | 0.7726 | 0.7758 |
| 0.4402 | 28.44 | 6200 | 0.4870 | 0.7778 | 0.7798 |
| 0.4421 | 29.36 | 6400 | 0.4884 | 0.7744 | 0.7772 |
| 0.4396 | 30.28 | 6600 | 0.4924 | 0.7730 | 0.7761 |
| 0.443 | 31.19 | 6800 | 0.4860 | 0.7786 | 0.7804 |
| 0.441 | 32.11 | 7000 | 0.4958 | 0.7687 | 0.7724 |
| 0.4381 | 33.03 | 7200 | 0.4938 | 0.7731 | 0.7761 |
| 0.4364 | 33.94 | 7400 | 0.4927 | 0.7776 | 0.7801 |
| 0.4381 | 34.86 | 7600 | 0.4952 | 0.7723 | 0.7755 |
| 0.4354 | 35.78 | 7800 | 0.4897 | 0.7759 | 0.7781 |
| 0.4378 | 36.7 | 8000 | 0.4896 | 0.7757 | 0.7784 |
| 0.4357 | 37.61 | 8200 | 0.4956 | 0.7698 | 0.7738 |
| 0.4367 | 38.53 | 8400 | 0.4893 | 0.7750 | 0.7775 |
| 0.4339 | 39.45 | 8600 | 0.4943 | 0.7738 | 0.7769 |
| 0.4324 | 40.37 | 8800 | 0.4955 | 0.7739 | 0.7767 |
| 0.4374 | 41.28 | 9000 | 0.4899 | 0.7723 | 0.7752 |
| 0.4287 | 42.2 | 9200 | 0.4913 | 0.7755 | 0.7781 |
| 0.4306 | 43.12 | 9400 | 0.4926 | 0.7761 | 0.7787 |
| 0.4372 | 44.04 | 9600 | 0.4897 | 0.7734 | 0.7761 |
| 0.4299 | 44.95 | 9800 | 0.4928 | 0.7738 | 0.7767 |
| 0.4331 | 45.87 | 10000 | 0.4919 | 0.7736 | 0.7764 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:36:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_4096\_512\_15M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4808
* F1 Score: 0.7894
* Accuracy: 0.7916
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- Loss: 0.4891
- F1 Score: 0.7820
- Accuracy: 0.7833
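The exact evaluation code is not included in the card; the following is a hedged sketch of the usual `compute_metrics` hook that produces F1 and accuracy numbers like those above, with the averaging mode being an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        # Assumption: macro-averaged F1; the card does not state the averaging.
        "f1_score": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```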
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5624 | 0.92 | 200 | 0.5315 | 0.7429 | 0.7457 |
| 0.5092 | 1.83 | 400 | 0.5086 | 0.7604 | 0.7635 |
| 0.4926 | 2.75 | 600 | 0.4951 | 0.7688 | 0.7706 |
| 0.4923 | 3.67 | 800 | 0.4915 | 0.7706 | 0.7726 |
| 0.4794 | 4.59 | 1000 | 0.4941 | 0.7755 | 0.7778 |
| 0.467 | 5.5 | 1200 | 0.4939 | 0.7712 | 0.7749 |
| 0.4683 | 6.42 | 1400 | 0.4812 | 0.7819 | 0.7830 |
| 0.4621 | 7.34 | 1600 | 0.5019 | 0.7636 | 0.7678 |
| 0.4547 | 8.26 | 1800 | 0.4850 | 0.7859 | 0.7870 |
| 0.4576 | 9.17 | 2000 | 0.4857 | 0.7738 | 0.7761 |
| 0.4515 | 10.09 | 2200 | 0.5165 | 0.7581 | 0.7643 |
| 0.4436 | 11.01 | 2400 | 0.4829 | 0.7715 | 0.7744 |
| 0.4425 | 11.93 | 2600 | 0.4783 | 0.7808 | 0.7815 |
| 0.4391 | 12.84 | 2800 | 0.4849 | 0.7727 | 0.7758 |
| 0.4331 | 13.76 | 3000 | 0.5034 | 0.7614 | 0.7678 |
| 0.4284 | 14.68 | 3200 | 0.4865 | 0.7789 | 0.7818 |
| 0.4293 | 15.6 | 3400 | 0.4772 | 0.7824 | 0.7850 |
| 0.4232 | 16.51 | 3600 | 0.5118 | 0.7737 | 0.7781 |
| 0.4211 | 17.43 | 3800 | 0.4997 | 0.7711 | 0.7749 |
| 0.4185 | 18.35 | 4000 | 0.4787 | 0.7833 | 0.7850 |
| 0.4162 | 19.27 | 4200 | 0.5059 | 0.7715 | 0.7752 |
| 0.4116 | 20.18 | 4400 | 0.5054 | 0.7720 | 0.7755 |
| 0.4065 | 21.1 | 4600 | 0.4930 | 0.7818 | 0.7841 |
| 0.4081 | 22.02 | 4800 | 0.4930 | 0.7833 | 0.7858 |
| 0.4038 | 22.94 | 5000 | 0.4915 | 0.7797 | 0.7824 |
| 0.4004 | 23.85 | 5200 | 0.4926 | 0.7749 | 0.7781 |
| 0.3933 | 24.77 | 5400 | 0.5156 | 0.7753 | 0.7795 |
| 0.3945 | 25.69 | 5600 | 0.5152 | 0.7709 | 0.7752 |
| 0.3947 | 26.61 | 5800 | 0.4924 | 0.7850 | 0.7870 |
| 0.3922 | 27.52 | 6000 | 0.5103 | 0.7811 | 0.7844 |
| 0.3859 | 28.44 | 6200 | 0.4932 | 0.7807 | 0.7830 |
| 0.3868 | 29.36 | 6400 | 0.4969 | 0.7758 | 0.7790 |
| 0.3842 | 30.28 | 6600 | 0.4957 | 0.7766 | 0.7801 |
| 0.3849 | 31.19 | 6800 | 0.4918 | 0.7820 | 0.7841 |
| 0.3793 | 32.11 | 7000 | 0.5041 | 0.7780 | 0.7810 |
| 0.3794 | 33.03 | 7200 | 0.5104 | 0.7737 | 0.7772 |
| 0.3764 | 33.94 | 7400 | 0.5096 | 0.7749 | 0.7784 |
| 0.3752 | 34.86 | 7600 | 0.5092 | 0.7735 | 0.7769 |
| 0.3699 | 35.78 | 7800 | 0.5049 | 0.7775 | 0.7801 |
| 0.3756 | 36.7 | 8000 | 0.5050 | 0.7772 | 0.7801 |
| 0.3692 | 37.61 | 8200 | 0.5162 | 0.7688 | 0.7732 |
| 0.3694 | 38.53 | 8400 | 0.5204 | 0.7727 | 0.7764 |
| 0.3694 | 39.45 | 8600 | 0.5092 | 0.7724 | 0.7758 |
| 0.3656 | 40.37 | 8800 | 0.5209 | 0.7723 | 0.7758 |
| 0.3726 | 41.28 | 9000 | 0.5062 | 0.7722 | 0.7755 |
| 0.3593 | 42.2 | 9200 | 0.5113 | 0.7753 | 0.7784 |
| 0.3619 | 43.12 | 9400 | 0.5140 | 0.7777 | 0.7807 |
| 0.3665 | 44.04 | 9600 | 0.5099 | 0.7738 | 0.7772 |
| 0.3613 | 44.95 | 9800 | 0.5139 | 0.7722 | 0.7758 |
| 0.3632 | 45.87 | 10000 | 0.5137 | 0.7745 | 0.7778 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:36:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_4096\_512\_15M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4891
* F1 Score: 0.7820
* Accuracy: 0.7833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
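The card's snippet is missing and the tags reveal only `transformers` and `safetensors`, so the following is a guess-level sketch: the repo name suggests an instruction-tuned text-generation model, but the task, prompt format, and everything except the repo id are assumptions.

```python
from transformers import pipeline

# Repo id from this card's metadata; "text-generation" is an assumed task.
generator = pipeline("text-generation", model="Fawazzx/Saul-instruct-1keg")
out = generator("Briefly explain what a model card is.", max_new_tokens=64)
print(out[0]["generated_text"])
```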
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Fawazzx/Saul-instruct-1keg | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:38:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
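No snippet is provided here either. The repo name suggests a PEFT adapter trained on top of Llama-2-7b-hf, so a hedged loading sketch would look like the one below; the base checkpoint id is inferred from the name alone (and is a gated model on the Hub), not stated anywhere in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base id inferred from the repo name — an assumption, and a gated checkpoint.
base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_2_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```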
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_2_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:38:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[GitHub](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
koSoLAR_2way_3000_10epoch_revised - bnb 4bits
- Model creator: https://huggingface.co/Yotto3108/
- Original model: https://huggingface.co/Yotto3108/koSoLAR_2way_3000_10epoch_revised/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
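This section is also unfilled in the original card. As an unofficial sketch: a bitsandbytes 4-bit checkpoint like this one can normally be loaded directly, because the quantization settings are serialized with the weights. The repository id below is taken from this card's metadata; a CUDA GPU plus the `bitsandbytes` and `accelerate` packages are assumed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/Yotto3108_-_koSoLAR_2way_3000_10epoch_revised-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# No explicit BitsAndBytesConfig is needed here: the 4-bit quantization config
# is read back from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```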
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/Yotto3108_-_koSoLAR_2way_3000_10epoch_revised-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-03T18:40:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
GitHub
Discord
Request more models
koSoLAR_2way_3000_10epoch_revised - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
48,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative code mapping follows the list):
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
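As an illustrative aid only, the listed values map onto `Seq2SeqTrainingArguments` roughly as shown below. The dataset, warmup, and generation settings are not stated in the card, so this is a sketch rather than the exact training script; note that the "Adam" line above is the Trainer's generic description, and its default optimizer is AdamW.

```python
from transformers import Seq2SeqTrainingArguments  # requires `accelerate` installed

# Everything not listed in the card (output_dir, warmup, weight decay, ...) is an assumption.
args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```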
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2684 | 1.0 | 1013 | 2.7165 |
| 3.251 | 2.0 | 2026 | 2.7168 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "base_model": "google/mt5-small", "model-index": [{"name": "mt5-small-finetuned", "results": []}]} | midnightGlow/mt5-small-finetuned | null | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:44:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-finetuned
===================
This model is a fine-tuned version of google/mt5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7168
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
39,
103,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
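This section is likewise a placeholder. Beyond plain adapter loading, one common follow-up is merging an adapter into its base model for standalone use. The sketch below assumes, from the repository name alone, that this checkpoint is a PEFT adapter for `meta-llama/Llama-2-7b-hf`; the output path is hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Both identifiers are assumptions inferred from this card's metadata.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_4_5e-05"
)

merged = model.merge_and_unload()           # fold the adapter deltas into the base weights
merged.save_pretrained("llama2-7b-merged")  # hypothetical local output directory
```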
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_4_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:45:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5707
- F1 Score: 0.7151
- Accuracy: 0.7160
## Model description
More information needed
## Intended uses & limitations
More information needed
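Pending that information, the following is a hedged sketch of one plausible way to run inference with this adapter. It assumes the adapter wraps the seqsight backbone with a sequence-classification head, which would be consistent with the binary GUE task and the F1/accuracy metrics reported below; none of this is confirmed by the card.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

adapter_id = "mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_15M-L1_f"

# trust_remote_code is a guess in case the seqsight backbone ships custom code;
# drop it if the base model loads with stock transformers classes.
tokenizer = AutoTokenizer.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M", trust_remote_code=True
)
model = AutoPeftModelForSequenceClassification.from_pretrained(
    adapter_id, trust_remote_code=True
)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```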
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6765 | 3.92 | 200 | 0.6361 | 0.6200 | 0.6284 |
| 0.6229 | 7.84 | 400 | 0.6058 | 0.6717 | 0.6728 |
| 0.6034 | 11.76 | 600 | 0.5859 | 0.6765 | 0.6765 |
| 0.5899 | 15.69 | 800 | 0.5713 | 0.7025 | 0.7037 |
| 0.5718 | 19.61 | 1000 | 0.5626 | 0.6963 | 0.6963 |
| 0.5673 | 23.53 | 1200 | 0.5653 | 0.6900 | 0.6963 |
| 0.5579 | 27.45 | 1400 | 0.5536 | 0.7049 | 0.7049 |
| 0.5476 | 31.37 | 1600 | 0.5476 | 0.7052 | 0.7062 |
| 0.5428 | 35.29 | 1800 | 0.5455 | 0.7104 | 0.7111 |
| 0.535 | 39.22 | 2000 | 0.5467 | 0.7085 | 0.7086 |
| 0.532 | 43.14 | 2200 | 0.5443 | 0.7017 | 0.7025 |
| 0.5279 | 47.06 | 2400 | 0.5421 | 0.7137 | 0.7160 |
| 0.5241 | 50.98 | 2600 | 0.5395 | 0.7153 | 0.7160 |
| 0.52 | 54.9 | 2800 | 0.5452 | 0.7162 | 0.7210 |
| 0.5158 | 58.82 | 3000 | 0.5400 | 0.7085 | 0.7123 |
| 0.513 | 62.75 | 3200 | 0.5329 | 0.7241 | 0.7259 |
| 0.5113 | 66.67 | 3400 | 0.5363 | 0.7253 | 0.7272 |
| 0.5092 | 70.59 | 3600 | 0.5329 | 0.7278 | 0.7284 |
| 0.5045 | 74.51 | 3800 | 0.5305 | 0.7253 | 0.7259 |
| 0.5058 | 78.43 | 4000 | 0.5295 | 0.7302 | 0.7321 |
| 0.4978 | 82.35 | 4200 | 0.5336 | 0.7298 | 0.7321 |
| 0.498 | 86.27 | 4400 | 0.5355 | 0.7184 | 0.7185 |
| 0.4959 | 90.2 | 4600 | 0.5336 | 0.7277 | 0.7296 |
| 0.4947 | 94.12 | 4800 | 0.5316 | 0.7292 | 0.7296 |
| 0.4884 | 98.04 | 5000 | 0.5306 | 0.7308 | 0.7321 |
| 0.4893 | 101.96 | 5200 | 0.5265 | 0.7335 | 0.7346 |
| 0.4868 | 105.88 | 5400 | 0.5248 | 0.7400 | 0.7420 |
| 0.4861 | 109.8 | 5600 | 0.5272 | 0.7385 | 0.7395 |
| 0.4846 | 113.73 | 5800 | 0.5285 | 0.7322 | 0.7346 |
| 0.4813 | 117.65 | 6000 | 0.5305 | 0.7384 | 0.7395 |
| 0.4837 | 121.57 | 6200 | 0.5289 | 0.7418 | 0.7432 |
| 0.4795 | 125.49 | 6400 | 0.5256 | 0.7402 | 0.7407 |
| 0.4801 | 129.41 | 6600 | 0.5270 | 0.7386 | 0.7395 |
| 0.4735 | 133.33 | 6800 | 0.5310 | 0.7318 | 0.7346 |
| 0.4764 | 137.25 | 7000 | 0.5273 | 0.7334 | 0.7346 |
| 0.474 | 141.18 | 7200 | 0.5282 | 0.7371 | 0.7383 |
| 0.4719 | 145.1 | 7400 | 0.5313 | 0.7408 | 0.7420 |
| 0.4742 | 149.02 | 7600 | 0.5277 | 0.7357 | 0.7370 |
| 0.4728 | 152.94 | 7800 | 0.5264 | 0.7369 | 0.7383 |
| 0.4709 | 156.86 | 8000 | 0.5283 | 0.7337 | 0.7358 |
| 0.4692 | 160.78 | 8200 | 0.5275 | 0.7371 | 0.7383 |
| 0.4701 | 164.71 | 8400 | 0.5276 | 0.7337 | 0.7346 |
| 0.4684 | 168.63 | 8600 | 0.5272 | 0.7292 | 0.7309 |
| 0.4672 | 172.55 | 8800 | 0.5285 | 0.7342 | 0.7358 |
| 0.4693 | 176.47 | 9000 | 0.5272 | 0.7325 | 0.7346 |
| 0.4679 | 180.39 | 9200 | 0.5287 | 0.7344 | 0.7358 |
| 0.4696 | 184.31 | 9400 | 0.5276 | 0.7387 | 0.7395 |
| 0.4611 | 188.24 | 9600 | 0.5299 | 0.7375 | 0.7383 |
| 0.4671 | 192.16 | 9800 | 0.5287 | 0.7398 | 0.7407 |
| 0.4638 | 196.08 | 10000 | 0.5287 | 0.7359 | 0.7370 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_0-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:46:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_0-seqsight\_4096\_512\_15M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5707
* F1 Score: 0.7151
* Accuracy: 0.7160
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6323
- F1 Score: 0.7318
- Accuracy: 0.7321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent optimizer and schedule follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
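For readers reconstructing this setup, the optimizer and schedule above correspond roughly to the sketch below: Adam with the stated betas/epsilon and a linear decay over 10000 steps. Zero warmup is an assumption (the card does not mention warmup), the model is a stand-in, and the Trainer's actual default optimizer is AdamW.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(16, 2)  # stand-in for the real PEFT-wrapped classifier
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # warmup assumed to be 0
)

for step in range(10_000):
    # loss.backward() would run here in a real training loop
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```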
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.651 | 3.92 | 200 | 0.5972 | 0.6787 | 0.6790 |
| 0.5801 | 7.84 | 400 | 0.5669 | 0.6982 | 0.6988 |
| 0.5516 | 11.76 | 600 | 0.5466 | 0.7186 | 0.7185 |
| 0.5386 | 15.69 | 800 | 0.5382 | 0.7266 | 0.7284 |
| 0.5201 | 19.61 | 1000 | 0.5429 | 0.7188 | 0.7198 |
| 0.5169 | 23.53 | 1200 | 0.5400 | 0.7217 | 0.7259 |
| 0.5046 | 27.45 | 1400 | 0.5362 | 0.7276 | 0.7284 |
| 0.4938 | 31.37 | 1600 | 0.5407 | 0.7370 | 0.7370 |
| 0.4891 | 35.29 | 1800 | 0.5378 | 0.7333 | 0.7333 |
| 0.4813 | 39.22 | 2000 | 0.5337 | 0.7441 | 0.7444 |
| 0.4757 | 43.14 | 2200 | 0.5407 | 0.7404 | 0.7407 |
| 0.4728 | 47.06 | 2400 | 0.5373 | 0.7439 | 0.7444 |
| 0.4668 | 50.98 | 2600 | 0.5404 | 0.7408 | 0.7407 |
| 0.4599 | 54.9 | 2800 | 0.5376 | 0.7403 | 0.7407 |
| 0.4563 | 58.82 | 3000 | 0.5309 | 0.7288 | 0.7309 |
| 0.4519 | 62.75 | 3200 | 0.5325 | 0.7328 | 0.7333 |
| 0.4465 | 66.67 | 3400 | 0.5407 | 0.7323 | 0.7333 |
| 0.4412 | 70.59 | 3600 | 0.5467 | 0.7391 | 0.7395 |
| 0.4341 | 74.51 | 3800 | 0.5391 | 0.7301 | 0.7309 |
| 0.4353 | 78.43 | 4000 | 0.5487 | 0.7323 | 0.7333 |
| 0.4248 | 82.35 | 4200 | 0.5502 | 0.7390 | 0.7407 |
| 0.4206 | 86.27 | 4400 | 0.5598 | 0.7333 | 0.7333 |
| 0.4173 | 90.2 | 4600 | 0.5601 | 0.7343 | 0.7346 |
| 0.4112 | 94.12 | 4800 | 0.5628 | 0.7354 | 0.7358 |
| 0.4058 | 98.04 | 5000 | 0.5624 | 0.7271 | 0.7284 |
| 0.4066 | 101.96 | 5200 | 0.5537 | 0.7344 | 0.7346 |
| 0.4001 | 105.88 | 5400 | 0.5578 | 0.7417 | 0.7420 |
| 0.3969 | 109.8 | 5600 | 0.5694 | 0.7358 | 0.7358 |
| 0.39 | 113.73 | 5800 | 0.5698 | 0.7318 | 0.7321 |
| 0.3879 | 117.65 | 6000 | 0.5685 | 0.7431 | 0.7432 |
| 0.3854 | 121.57 | 6200 | 0.5707 | 0.7481 | 0.7481 |
| 0.3806 | 125.49 | 6400 | 0.5616 | 0.7431 | 0.7432 |
| 0.3819 | 129.41 | 6600 | 0.5663 | 0.7358 | 0.7358 |
| 0.3724 | 133.33 | 6800 | 0.5750 | 0.7399 | 0.7407 |
| 0.3725 | 137.25 | 7000 | 0.5767 | 0.7393 | 0.7395 |
| 0.3697 | 141.18 | 7200 | 0.5807 | 0.7420 | 0.7420 |
| 0.3659 | 145.1 | 7400 | 0.5816 | 0.7431 | 0.7432 |
| 0.3633 | 149.02 | 7600 | 0.5830 | 0.7369 | 0.7370 |
| 0.365 | 152.94 | 7800 | 0.5800 | 0.7382 | 0.7383 |
| 0.3595 | 156.86 | 8000 | 0.5868 | 0.7465 | 0.7469 |
| 0.3577 | 160.78 | 8200 | 0.5844 | 0.7431 | 0.7432 |
| 0.3556 | 164.71 | 8400 | 0.5883 | 0.7420 | 0.7420 |
| 0.3552 | 168.63 | 8600 | 0.5881 | 0.7358 | 0.7358 |
| 0.3502 | 172.55 | 8800 | 0.5883 | 0.7393 | 0.7395 |
| 0.3525 | 176.47 | 9000 | 0.5887 | 0.7431 | 0.7432 |
| 0.3499 | 180.39 | 9200 | 0.5929 | 0.7393 | 0.7395 |
| 0.3496 | 184.31 | 9400 | 0.5946 | 0.7407 | 0.7407 |
| 0.3465 | 188.24 | 9600 | 0.6016 | 0.7395 | 0.7395 |
| 0.3496 | 192.16 | 9800 | 0.5957 | 0.7419 | 0.7420 |
| 0.3494 | 196.08 | 10000 | 0.5947 | 0.7419 | 0.7420 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_0-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:46:14+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_0-seqsight\_4096\_512\_15M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6323
* F1 Score: 0.7318
* Accuracy: 0.7321
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7142
- F1 Score: 0.7214
- Accuracy: 0.7222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
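For reference, the F1 Score and Accuracy columns in the results table below are conventionally computed as in this toy sketch. The card does not state which F1 averaging the training script used, so macro averaging is an assumption.

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0]  # toy labels
y_pred = [0, 1, 0, 0, 1, 1]  # toy predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1 score:", f1_score(y_true, y_pred, average="macro"))  # averaging assumed
```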
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6322 | 3.92 | 200 | 0.5711 | 0.6815 | 0.6815 |
| 0.5507 | 7.84 | 400 | 0.5461 | 0.7235 | 0.7235 |
| 0.5244 | 11.76 | 600 | 0.5325 | 0.7264 | 0.7272 |
| 0.5054 | 15.69 | 800 | 0.5346 | 0.7204 | 0.7259 |
| 0.4822 | 19.61 | 1000 | 0.5386 | 0.7367 | 0.7370 |
| 0.467 | 23.53 | 1200 | 0.5403 | 0.7330 | 0.7358 |
| 0.4434 | 27.45 | 1400 | 0.5631 | 0.7432 | 0.7432 |
| 0.4241 | 31.37 | 1600 | 0.5869 | 0.7392 | 0.7395 |
| 0.4093 | 35.29 | 1800 | 0.5711 | 0.7445 | 0.7444 |
| 0.3862 | 39.22 | 2000 | 0.5681 | 0.7482 | 0.7481 |
| 0.374 | 43.14 | 2200 | 0.5797 | 0.7406 | 0.7407 |
| 0.3573 | 47.06 | 2400 | 0.5638 | 0.7466 | 0.7469 |
| 0.3401 | 50.98 | 2600 | 0.6463 | 0.7454 | 0.7457 |
| 0.3255 | 54.9 | 2800 | 0.5911 | 0.7417 | 0.7432 |
| 0.309 | 58.82 | 3000 | 0.6326 | 0.7432 | 0.7432 |
| 0.2962 | 62.75 | 3200 | 0.5930 | 0.7500 | 0.7506 |
| 0.2864 | 66.67 | 3400 | 0.6552 | 0.7493 | 0.7494 |
| 0.2716 | 70.59 | 3600 | 0.6994 | 0.7482 | 0.7481 |
| 0.2592 | 74.51 | 3800 | 0.6942 | 0.7370 | 0.7370 |
| 0.2536 | 78.43 | 4000 | 0.7337 | 0.7321 | 0.7321 |
| 0.2414 | 82.35 | 4200 | 0.7403 | 0.7309 | 0.7309 |
| 0.2305 | 86.27 | 4400 | 0.7666 | 0.7272 | 0.7272 |
| 0.2259 | 90.2 | 4600 | 0.7411 | 0.7334 | 0.7333 |
| 0.2119 | 94.12 | 4800 | 0.8041 | 0.7346 | 0.7346 |
| 0.2047 | 98.04 | 5000 | 0.8254 | 0.7370 | 0.7370 |
| 0.1937 | 101.96 | 5200 | 0.8313 | 0.7284 | 0.7284 |
| 0.1886 | 105.88 | 5400 | 0.8541 | 0.7222 | 0.7222 |
| 0.1842 | 109.8 | 5600 | 0.9007 | 0.7308 | 0.7309 |
| 0.1783 | 113.73 | 5800 | 0.8856 | 0.7332 | 0.7333 |
| 0.1748 | 117.65 | 6000 | 0.8749 | 0.7259 | 0.7259 |
| 0.1678 | 121.57 | 6200 | 0.9050 | 0.7368 | 0.7370 |
| 0.1614 | 125.49 | 6400 | 0.9158 | 0.7272 | 0.7272 |
| 0.1616 | 129.41 | 6600 | 0.9444 | 0.7305 | 0.7309 |
| 0.1558 | 133.33 | 6800 | 0.9417 | 0.7320 | 0.7321 |
| 0.1566 | 137.25 | 7000 | 0.9549 | 0.7258 | 0.7259 |
| 0.1487 | 141.18 | 7200 | 0.9334 | 0.7420 | 0.7420 |
| 0.1421 | 145.1 | 7400 | 0.9786 | 0.7247 | 0.7247 |
| 0.1428 | 149.02 | 7600 | 0.9591 | 0.7246 | 0.7247 |
| 0.1385 | 152.94 | 7800 | 1.0117 | 0.7189 | 0.7198 |
| 0.1388 | 156.86 | 8000 | 0.9828 | 0.7284 | 0.7284 |
| 0.1371 | 160.78 | 8200 | 0.9930 | 0.7231 | 0.7235 |
| 0.1306 | 164.71 | 8400 | 1.0141 | 0.7234 | 0.7235 |
| 0.1312 | 168.63 | 8600 | 1.0263 | 0.7159 | 0.7160 |
| 0.127 | 172.55 | 8800 | 1.0175 | 0.7258 | 0.7259 |
| 0.1267 | 176.47 | 9000 | 1.0370 | 0.7209 | 0.7210 |
| 0.1236 | 180.39 | 9200 | 1.0419 | 0.7309 | 0.7309 |
| 0.1259 | 184.31 | 9400 | 1.0520 | 0.7209 | 0.7210 |
| 0.123 | 188.24 | 9600 | 1.0421 | 0.7295 | 0.7296 |
| 0.1216 | 192.16 | 9800 | 1.0529 | 0.7232 | 0.7235 |
| 0.1241 | 196.08 | 10000 | 1.0487 | 0.7195 | 0.7198 |
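For reference, a hedged sketch of how the F1 Score and Accuracy columns above could be produced with a `compute_metrics` hook (macro-averaged F1 is an assumption; the card does not state the averaging mode):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),  # assumed averaging
    }
```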
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_0-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:47:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_0-seqsight\_4096\_512\_15M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7142
* F1 Score: 0.7214
* Accuracy: 0.7222
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2775
- F1 Score: 0.8750
- Accuracy: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
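A hedged sketch of how a PEFT (LoRA) adapter might be attached to the base model before training with the settings above; the card does not report the adapter rank or target modules, so the `LoraConfig` values below are purely illustrative:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M",
    num_labels=2,            # assumed: binary GUE task
    trust_remote_code=True,  # assumed: custom sequence architecture
)

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,               # illustrative rank, not reported in this card
    lora_alpha=16,     # illustrative
    lora_dropout=0.1,  # illustrative
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights train
```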
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5687 | 0.47 | 200 | 0.4337 | 0.7933 | 0.7933 |
| 0.4338 | 0.95 | 400 | 0.3613 | 0.8368 | 0.8368 |
| 0.3769 | 1.42 | 600 | 0.3328 | 0.8492 | 0.8492 |
| 0.3629 | 1.9 | 800 | 0.3141 | 0.8555 | 0.8557 |
| 0.3363 | 2.37 | 1000 | 0.3057 | 0.8615 | 0.8617 |
| 0.3351 | 2.84 | 1200 | 0.3009 | 0.8655 | 0.8655 |
| 0.3188 | 3.32 | 1400 | 0.2975 | 0.8682 | 0.8682 |
| 0.3239 | 3.79 | 1600 | 0.2905 | 0.8703 | 0.8704 |
| 0.3185 | 4.27 | 1800 | 0.2873 | 0.8682 | 0.8683 |
| 0.3109 | 4.74 | 2000 | 0.2915 | 0.8716 | 0.8716 |
| 0.3152 | 5.21 | 2200 | 0.2859 | 0.8712 | 0.8713 |
| 0.3094 | 5.69 | 2400 | 0.2899 | 0.8703 | 0.8703 |
| 0.3059 | 6.16 | 2600 | 0.2812 | 0.8740 | 0.8741 |
| 0.3028 | 6.64 | 2800 | 0.2797 | 0.8735 | 0.8737 |
| 0.3044 | 7.11 | 3000 | 0.2844 | 0.8747 | 0.8747 |
| 0.2984 | 7.58 | 3200 | 0.3021 | 0.8676 | 0.8676 |
| 0.3031 | 8.06 | 3400 | 0.2783 | 0.8744 | 0.8746 |
| 0.2969 | 8.53 | 3600 | 0.2832 | 0.8771 | 0.8771 |
| 0.3 | 9.0 | 3800 | 0.2786 | 0.8773 | 0.8774 |
| 0.2965 | 9.48 | 4000 | 0.2782 | 0.8771 | 0.8772 |
| 0.297 | 9.95 | 4200 | 0.2798 | 0.8760 | 0.8761 |
| 0.2921 | 10.43 | 4400 | 0.2808 | 0.8758 | 0.8759 |
| 0.2935 | 10.9 | 4600 | 0.2798 | 0.8762 | 0.8762 |
| 0.2904 | 11.37 | 4800 | 0.2826 | 0.8753 | 0.8753 |
| 0.2979 | 11.85 | 5000 | 0.2724 | 0.8791 | 0.8792 |
| 0.2905 | 12.32 | 5200 | 0.2891 | 0.8744 | 0.8744 |
| 0.2946 | 12.8 | 5400 | 0.2817 | 0.8769 | 0.8769 |
| 0.2941 | 13.27 | 5600 | 0.2762 | 0.8765 | 0.8765 |
| 0.2868 | 13.74 | 5800 | 0.2748 | 0.8761 | 0.8762 |
| 0.2885 | 14.22 | 6000 | 0.2802 | 0.8760 | 0.8761 |
| 0.2885 | 14.69 | 6200 | 0.2768 | 0.8773 | 0.8774 |
| 0.2915 | 15.17 | 6400 | 0.2815 | 0.8750 | 0.8750 |
| 0.2901 | 15.64 | 6600 | 0.2724 | 0.8794 | 0.8795 |
| 0.2848 | 16.11 | 6800 | 0.2722 | 0.8782 | 0.8783 |
| 0.2859 | 16.59 | 7000 | 0.2754 | 0.8763 | 0.8764 |
| 0.2858 | 17.06 | 7200 | 0.2763 | 0.8777 | 0.8777 |
| 0.2868 | 17.54 | 7400 | 0.2734 | 0.8772 | 0.8772 |
| 0.2844 | 18.01 | 7600 | 0.2706 | 0.8797 | 0.8798 |
| 0.2816 | 18.48 | 7800 | 0.2759 | 0.8783 | 0.8783 |
| 0.289 | 18.96 | 8000 | 0.2736 | 0.8775 | 0.8775 |
| 0.2848 | 19.43 | 8200 | 0.2724 | 0.8791 | 0.8792 |
| 0.281 | 19.91 | 8400 | 0.2711 | 0.8799 | 0.8799 |
| 0.287 | 20.38 | 8600 | 0.2717 | 0.8785 | 0.8786 |
| 0.2868 | 20.85 | 8800 | 0.2703 | 0.8791 | 0.8792 |
| 0.2809 | 21.33 | 9000 | 0.2714 | 0.8793 | 0.8793 |
| 0.2812 | 21.8 | 9200 | 0.2733 | 0.8791 | 0.8792 |
| 0.2827 | 22.27 | 9400 | 0.2720 | 0.8809 | 0.8809 |
| 0.2832 | 22.75 | 9600 | 0.2713 | 0.8793 | 0.8793 |
| 0.2837 | 23.22 | 9800 | 0.2722 | 0.8788 | 0.8789 |
| 0.2836 | 23.7 | 10000 | 0.2720 | 0.8788 | 0.8789 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_1-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:47:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_1-seqsight\_4096\_512\_15M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2775
* F1 Score: 0.8750
* Accuracy: 0.8750
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
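Until the authors fill this in, a minimal usage sketch is given below. It assumes the repository holds a TrOCR-style vision-encoder-decoder checkpoint with a saved processor; the image path is a placeholder:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

repo = "karimgamaleldin/My_finetuned_ocr_model"
processor = TrOCRProcessor.from_pretrained(repo)       # assumed to be saved in the repo
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("text_line.png").convert("RGB")     # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```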
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | karimgamaleldin/My_finetuned_ocr_model | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:48:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
35,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2575
- F1 Score: 0.8841
- Accuracy: 0.8841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
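A hedged sketch of loading the trained adapter on top of the base model for inference (the exact task head class is an assumption; the card does not state it):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M",
    trust_remote_code=True,  # assumed: custom sequence architecture
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_mouse_1-seqsight_4096_512_15M-L8_f"
)
model.eval()  # the adapter is applied during the forward pass
```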
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.512 | 0.47 | 200 | 0.3546 | 0.8413 | 0.8414 |
| 0.3557 | 0.95 | 400 | 0.3119 | 0.8643 | 0.8643 |
| 0.3269 | 1.42 | 600 | 0.2998 | 0.8692 | 0.8692 |
| 0.323 | 1.9 | 800 | 0.2866 | 0.8729 | 0.8729 |
| 0.3031 | 2.37 | 1000 | 0.2836 | 0.8738 | 0.8738 |
| 0.3056 | 2.84 | 1200 | 0.2788 | 0.8750 | 0.8750 |
| 0.2907 | 3.32 | 1400 | 0.2784 | 0.8778 | 0.8778 |
| 0.2979 | 3.79 | 1600 | 0.2822 | 0.8740 | 0.8740 |
| 0.29 | 4.27 | 1800 | 0.2641 | 0.8831 | 0.8832 |
| 0.2846 | 4.74 | 2000 | 0.2745 | 0.8786 | 0.8786 |
| 0.2873 | 5.21 | 2200 | 0.2602 | 0.8838 | 0.8839 |
| 0.2801 | 5.69 | 2400 | 0.2711 | 0.8796 | 0.8796 |
| 0.2777 | 6.16 | 2600 | 0.2580 | 0.8847 | 0.8848 |
| 0.2742 | 6.64 | 2800 | 0.2585 | 0.8844 | 0.8845 |
| 0.2744 | 7.11 | 3000 | 0.2672 | 0.8875 | 0.8875 |
| 0.2662 | 7.58 | 3200 | 0.2852 | 0.8787 | 0.8787 |
| 0.2706 | 8.06 | 3400 | 0.2534 | 0.8880 | 0.8881 |
| 0.2632 | 8.53 | 3600 | 0.2730 | 0.8835 | 0.8835 |
| 0.2686 | 9.0 | 3800 | 0.2531 | 0.8906 | 0.8906 |
| 0.2622 | 9.48 | 4000 | 0.2591 | 0.8897 | 0.8897 |
| 0.2627 | 9.95 | 4200 | 0.2543 | 0.8919 | 0.8919 |
| 0.2574 | 10.43 | 4400 | 0.2543 | 0.8903 | 0.8903 |
| 0.2593 | 10.9 | 4600 | 0.2636 | 0.8852 | 0.8852 |
| 0.256 | 11.37 | 4800 | 0.2646 | 0.8878 | 0.8878 |
| 0.263 | 11.85 | 5000 | 0.2527 | 0.8892 | 0.8893 |
| 0.2563 | 12.32 | 5200 | 0.2630 | 0.8860 | 0.8860 |
| 0.2587 | 12.8 | 5400 | 0.2657 | 0.8851 | 0.8851 |
| 0.2572 | 13.27 | 5600 | 0.2545 | 0.8897 | 0.8897 |
| 0.252 | 13.74 | 5800 | 0.2529 | 0.8897 | 0.8897 |
| 0.2523 | 14.22 | 6000 | 0.2604 | 0.8906 | 0.8906 |
| 0.2532 | 14.69 | 6200 | 0.2552 | 0.8919 | 0.8919 |
| 0.2533 | 15.17 | 6400 | 0.2615 | 0.8870 | 0.8870 |
| 0.2536 | 15.64 | 6600 | 0.2504 | 0.8931 | 0.8931 |
| 0.25 | 16.11 | 6800 | 0.2506 | 0.8913 | 0.8913 |
| 0.2483 | 16.59 | 7000 | 0.2577 | 0.8891 | 0.8891 |
| 0.2503 | 17.06 | 7200 | 0.2556 | 0.8888 | 0.8888 |
| 0.2478 | 17.54 | 7400 | 0.2612 | 0.8855 | 0.8855 |
| 0.2473 | 18.01 | 7600 | 0.2487 | 0.8929 | 0.8930 |
| 0.2448 | 18.48 | 7800 | 0.2552 | 0.8897 | 0.8897 |
| 0.2525 | 18.96 | 8000 | 0.2501 | 0.8913 | 0.8913 |
| 0.2465 | 19.43 | 8200 | 0.2479 | 0.8934 | 0.8934 |
| 0.2434 | 19.91 | 8400 | 0.2500 | 0.8910 | 0.8910 |
| 0.2492 | 20.38 | 8600 | 0.2495 | 0.8918 | 0.8918 |
| 0.2471 | 20.85 | 8800 | 0.2484 | 0.8907 | 0.8907 |
| 0.2433 | 21.33 | 9000 | 0.2486 | 0.8921 | 0.8921 |
| 0.2435 | 21.8 | 9200 | 0.2530 | 0.8897 | 0.8897 |
| 0.2438 | 22.27 | 9400 | 0.2509 | 0.8912 | 0.8912 |
| 0.2456 | 22.75 | 9600 | 0.2494 | 0.8904 | 0.8904 |
| 0.2456 | 23.22 | 9800 | 0.2501 | 0.8907 | 0.8907 |
| 0.2452 | 23.7 | 10000 | 0.2494 | 0.8910 | 0.8910 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_1-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:48:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_1-seqsight\_4096\_512\_15M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2575
* F1 Score: 0.8841
* Accuracy: 0.8841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2447
- F1 Score: 0.8915
- Accuracy: 0.8915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
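For readers reproducing this outside the `Trainer`, the optimizer/scheduler pair above corresponds roughly to the following (a sketch; `model` is assumed to exist):

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Plain Adam, matching "optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08"
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,         # assumed: no warmup is listed above
    num_training_steps=10_000,  # training_steps: 10000
)
# step pattern inside the loop: loss.backward(); optimizer.step();
# scheduler.step(); optimizer.zero_grad()
```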
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4601 | 0.47 | 200 | 0.3142 | 0.8631 | 0.8632 |
| 0.3303 | 0.95 | 400 | 0.2984 | 0.8698 | 0.8698 |
| 0.3109 | 1.42 | 600 | 0.2808 | 0.8777 | 0.8777 |
| 0.3049 | 1.9 | 800 | 0.2759 | 0.8765 | 0.8765 |
| 0.289 | 2.37 | 1000 | 0.2769 | 0.8759 | 0.8759 |
| 0.2872 | 2.84 | 1200 | 0.2623 | 0.8827 | 0.8827 |
| 0.2714 | 3.32 | 1400 | 0.2595 | 0.8872 | 0.8872 |
| 0.2761 | 3.79 | 1600 | 0.2817 | 0.8801 | 0.8801 |
| 0.2663 | 4.27 | 1800 | 0.2505 | 0.8901 | 0.8901 |
| 0.2637 | 4.74 | 2000 | 0.2647 | 0.8842 | 0.8842 |
| 0.2641 | 5.21 | 2200 | 0.2466 | 0.8924 | 0.8925 |
| 0.2589 | 5.69 | 2400 | 0.2591 | 0.8842 | 0.8842 |
| 0.2537 | 6.16 | 2600 | 0.2454 | 0.8904 | 0.8906 |
| 0.2532 | 6.64 | 2800 | 0.2483 | 0.8905 | 0.8906 |
| 0.2506 | 7.11 | 3000 | 0.2622 | 0.8861 | 0.8861 |
| 0.2427 | 7.58 | 3200 | 0.2720 | 0.8820 | 0.8820 |
| 0.2482 | 8.06 | 3400 | 0.2399 | 0.8943 | 0.8943 |
| 0.2383 | 8.53 | 3600 | 0.2575 | 0.8885 | 0.8885 |
| 0.2467 | 9.0 | 3800 | 0.2403 | 0.8959 | 0.8959 |
| 0.2381 | 9.48 | 4000 | 0.2419 | 0.8965 | 0.8965 |
| 0.2405 | 9.95 | 4200 | 0.2454 | 0.8944 | 0.8944 |
| 0.233 | 10.43 | 4400 | 0.2462 | 0.8947 | 0.8947 |
| 0.2372 | 10.9 | 4600 | 0.2461 | 0.8922 | 0.8922 |
| 0.2334 | 11.37 | 4800 | 0.2563 | 0.8904 | 0.8904 |
| 0.239 | 11.85 | 5000 | 0.2388 | 0.8965 | 0.8965 |
| 0.2336 | 12.32 | 5200 | 0.2443 | 0.8961 | 0.8961 |
| 0.2343 | 12.8 | 5400 | 0.2508 | 0.8937 | 0.8937 |
| 0.2321 | 13.27 | 5600 | 0.2396 | 0.8968 | 0.8968 |
| 0.2271 | 13.74 | 5800 | 0.2384 | 0.8953 | 0.8953 |
| 0.2287 | 14.22 | 6000 | 0.2490 | 0.8934 | 0.8934 |
| 0.2287 | 14.69 | 6200 | 0.2409 | 0.8980 | 0.8980 |
| 0.2292 | 15.17 | 6400 | 0.2463 | 0.8944 | 0.8944 |
| 0.2292 | 15.64 | 6600 | 0.2372 | 0.8990 | 0.8990 |
| 0.2242 | 16.11 | 6800 | 0.2366 | 0.9011 | 0.9011 |
| 0.2224 | 16.59 | 7000 | 0.2433 | 0.8968 | 0.8968 |
| 0.227 | 17.06 | 7200 | 0.2425 | 0.8973 | 0.8973 |
| 0.2218 | 17.54 | 7400 | 0.2453 | 0.8944 | 0.8944 |
| 0.2223 | 18.01 | 7600 | 0.2321 | 0.9023 | 0.9023 |
| 0.2174 | 18.48 | 7800 | 0.2430 | 0.8949 | 0.8949 |
| 0.227 | 18.96 | 8000 | 0.2380 | 0.8998 | 0.8998 |
| 0.2214 | 19.43 | 8200 | 0.2332 | 0.8995 | 0.8995 |
| 0.2174 | 19.91 | 8400 | 0.2363 | 0.9004 | 0.9004 |
| 0.2208 | 20.38 | 8600 | 0.2368 | 0.8977 | 0.8977 |
| 0.2206 | 20.85 | 8800 | 0.2345 | 0.8999 | 0.8999 |
| 0.2164 | 21.33 | 9000 | 0.2353 | 0.8990 | 0.8990 |
| 0.215 | 21.8 | 9200 | 0.2399 | 0.8958 | 0.8958 |
| 0.2165 | 22.27 | 9400 | 0.2374 | 0.8983 | 0.8983 |
| 0.2181 | 22.75 | 9600 | 0.2360 | 0.8996 | 0.8996 |
| 0.2172 | 23.22 | 9800 | 0.2367 | 0.8996 | 0.8996 |
| 0.2181 | 23.7 | 10000 | 0.2364 | 0.8999 | 0.8999 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_1-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:48:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_1-seqsight\_4096\_512\_15M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2447
* F1 Score: 0.8915
* Accuracy: 0.8915
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
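Until this section is completed, a minimal inference sketch follows. The repository tags suggest a Llama-architecture conversational model, so the chat-template usage below is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GamblerOnTrain/T-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```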
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GamblerOnTrain/T-1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:49:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
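Until this section is completed, a quick-start via the `pipeline` API is sketched below (assuming the repository contains a complete tokenizer and weights):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="GamblerOnTrain/T-2")
result = generator("Once upon a time", max_new_tokens=50)
print(result[0]["generated_text"])
```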
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | GamblerOnTrain/T-2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:49:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
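This section is also left blank, so the following is a hedged sketch for the conversational use suggested by the tags; it relies on the repo shipping a chat template, which is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/56xzwqb"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a model card is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```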
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/56xzwqb | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:51:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** MarcoMurgia97
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
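A minimal inference sketch, assuming the standard Unsloth loading API and the Mistral-instruct prompt format; the sequence length and 4-bit loading are illustrative choices.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MarcoMurgia97/model",
    max_seq_length=2048,   # assumption; match your fine-tuning setup
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer(["[INST] Hello! [/INST]"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```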
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | MarcoMurgia97/model | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-03T18:51:13+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: MarcoMurgia97
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: MarcoMurgia97\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: MarcoMurgia97\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
89,
88
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n# Uploaded model\n\n- Developed by: MarcoMurgia97\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
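The card leaves this blank, but the repo id suggests PEFT-style adapters for Llama-2-7b. The sketch below shows one plausible loading pattern; the base-model id and the PEFT assumption are inferred from the repo name, not stated in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base, inferred from the repo name
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_4_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapters
```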
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_4_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:51:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6110
- F1 Score: 0.6554
- Accuracy: 0.6559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
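For readers who want to reproduce this setup, the hyperparameters above map one-to-one onto 🤗 `TrainingArguments` as sketched below; the output directory and single-device batch layout are assumptions, and any argument not listed in the card is left at its default.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_4-seqsight_4096_512_15M-L1_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,  # assumes a single device
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```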
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.683 | 1.69 | 200 | 0.6566 | 0.6034 | 0.6033 |
| 0.6508 | 3.39 | 400 | 0.6407 | 0.6230 | 0.6235 |
| 0.6239 | 5.08 | 600 | 0.6308 | 0.6494 | 0.6495 |
| 0.6163 | 6.78 | 800 | 0.6259 | 0.6523 | 0.6527 |
| 0.6116 | 8.47 | 1000 | 0.6196 | 0.6535 | 0.6537 |
| 0.6067 | 10.17 | 1200 | 0.6199 | 0.6473 | 0.6500 |
| 0.6023 | 11.86 | 1400 | 0.6083 | 0.6664 | 0.6665 |
| 0.5984 | 13.56 | 1600 | 0.6065 | 0.6617 | 0.6617 |
| 0.5946 | 15.25 | 1800 | 0.6030 | 0.6704 | 0.6707 |
| 0.5924 | 16.95 | 2000 | 0.6005 | 0.6664 | 0.6676 |
| 0.5883 | 18.64 | 2200 | 0.6036 | 0.6649 | 0.6670 |
| 0.5884 | 20.34 | 2400 | 0.5927 | 0.6783 | 0.6782 |
| 0.5858 | 22.03 | 2600 | 0.6051 | 0.6606 | 0.6649 |
| 0.5826 | 23.73 | 2800 | 0.5936 | 0.6682 | 0.6697 |
| 0.5795 | 25.42 | 3000 | 0.5884 | 0.6846 | 0.6845 |
| 0.582 | 27.12 | 3200 | 0.6029 | 0.6611 | 0.6670 |
| 0.5777 | 28.81 | 3400 | 0.5910 | 0.6725 | 0.6739 |
| 0.5738 | 30.51 | 3600 | 0.5898 | 0.6738 | 0.6755 |
| 0.5762 | 32.2 | 3800 | 0.5907 | 0.6773 | 0.6792 |
| 0.5723 | 33.9 | 4000 | 0.5926 | 0.6742 | 0.6771 |
| 0.5749 | 35.59 | 4200 | 0.5941 | 0.6714 | 0.6750 |
| 0.5725 | 37.29 | 4400 | 0.5861 | 0.6829 | 0.6840 |
| 0.5697 | 38.98 | 4600 | 0.5849 | 0.6865 | 0.6872 |
| 0.5704 | 40.68 | 4800 | 0.5867 | 0.6790 | 0.6808 |
| 0.5636 | 42.37 | 5000 | 0.5876 | 0.6862 | 0.6872 |
| 0.5688 | 44.07 | 5200 | 0.5832 | 0.6948 | 0.6952 |
| 0.5672 | 45.76 | 5400 | 0.5889 | 0.6780 | 0.6808 |
| 0.5659 | 47.46 | 5600 | 0.5863 | 0.6885 | 0.6888 |
| 0.5679 | 49.15 | 5800 | 0.5958 | 0.6677 | 0.6723 |
| 0.5659 | 50.85 | 6000 | 0.5818 | 0.6840 | 0.6851 |
| 0.5643 | 52.54 | 6200 | 0.5843 | 0.6858 | 0.6872 |
| 0.5642 | 54.24 | 6400 | 0.5827 | 0.6878 | 0.6888 |
| 0.5631 | 55.93 | 6600 | 0.5810 | 0.6931 | 0.6936 |
| 0.5644 | 57.63 | 6800 | 0.5784 | 0.7031 | 0.7031 |
| 0.5635 | 59.32 | 7000 | 0.5866 | 0.6773 | 0.6798 |
| 0.5596 | 61.02 | 7200 | 0.5803 | 0.6990 | 0.6994 |
| 0.5629 | 62.71 | 7400 | 0.5813 | 0.6911 | 0.6920 |
| 0.5617 | 64.41 | 7600 | 0.5839 | 0.6920 | 0.6930 |
| 0.5618 | 66.1 | 7800 | 0.5828 | 0.6932 | 0.6941 |
| 0.5576 | 67.8 | 8000 | 0.5818 | 0.6989 | 0.6994 |
| 0.5608 | 69.49 | 8200 | 0.5811 | 0.6956 | 0.6962 |
| 0.5562 | 71.19 | 8400 | 0.5827 | 0.7005 | 0.7010 |
| 0.5598 | 72.88 | 8600 | 0.5803 | 0.6982 | 0.6984 |
| 0.5576 | 74.58 | 8800 | 0.5847 | 0.6898 | 0.6909 |
| 0.5572 | 76.27 | 9000 | 0.5833 | 0.6945 | 0.6952 |
| 0.5579 | 77.97 | 9200 | 0.5821 | 0.6950 | 0.6957 |
| 0.5579 | 79.66 | 9400 | 0.5813 | 0.7012 | 0.7015 |
| 0.5586 | 81.36 | 9600 | 0.5823 | 0.6946 | 0.6952 |
| 0.5566 | 83.05 | 9800 | 0.5814 | 0.7006 | 0.7010 |
| 0.5552 | 84.75 | 10000 | 0.5825 | 0.6934 | 0.6941 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_4-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:52:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_4-seqsight\_4096\_512\_15M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6110
* F1 Score: 0.6554
* Accuracy: 0.6559
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6101
- F1 Score: 0.6606
- Accuracy: 0.6606
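As an illustration of how scores like these are typically produced, the sketch below computes F1 and accuracy with scikit-learn on placeholder predictions; the actual evaluation pipeline and the F1 averaging mode used for this card are not stated and are assumptions here.

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]  # placeholder evaluation labels
y_pred = [0, 1, 0, 0, 1]  # placeholder model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred, average="macro"))  # averaging mode assumed
```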
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
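The same schedule can also be written directly in PyTorch, which makes the optimizer settings above explicit; the tiny stand-in model, the zero warmup steps, and the loop body are assumptions for illustration.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 2)  # stand-in for the fine-tuned model
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # warmup assumed 0
)

for step in range(10_000):
    # ... forward/backward pass on a batch of 128 examples goes here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```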
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6674 | 1.69 | 200 | 0.6381 | 0.6401 | 0.6399 |
| 0.6256 | 3.39 | 400 | 0.6245 | 0.6486 | 0.6495 |
| 0.607 | 5.08 | 600 | 0.6118 | 0.6598 | 0.6606 |
| 0.5989 | 6.78 | 800 | 0.6010 | 0.6679 | 0.6691 |
| 0.5915 | 8.47 | 1000 | 0.5868 | 0.6823 | 0.6824 |
| 0.583 | 10.17 | 1200 | 0.5847 | 0.6824 | 0.6840 |
| 0.5796 | 11.86 | 1400 | 0.5738 | 0.6963 | 0.6962 |
| 0.5732 | 13.56 | 1600 | 0.5765 | 0.6930 | 0.6930 |
| 0.5673 | 15.25 | 1800 | 0.5703 | 0.6951 | 0.6952 |
| 0.564 | 16.95 | 2000 | 0.5757 | 0.6873 | 0.6909 |
| 0.5596 | 18.64 | 2200 | 0.5713 | 0.6978 | 0.6984 |
| 0.5596 | 20.34 | 2400 | 0.5668 | 0.7117 | 0.7116 |
| 0.5554 | 22.03 | 2600 | 0.5946 | 0.6729 | 0.6792 |
| 0.5523 | 23.73 | 2800 | 0.5647 | 0.6984 | 0.6989 |
| 0.5483 | 25.42 | 3000 | 0.5646 | 0.6993 | 0.6994 |
| 0.549 | 27.12 | 3200 | 0.5756 | 0.6921 | 0.6941 |
| 0.5452 | 28.81 | 3400 | 0.5690 | 0.7002 | 0.7005 |
| 0.541 | 30.51 | 3600 | 0.5648 | 0.7075 | 0.7074 |
| 0.5424 | 32.2 | 3800 | 0.5669 | 0.7040 | 0.7042 |
| 0.5385 | 33.9 | 4000 | 0.5669 | 0.7035 | 0.7042 |
| 0.538 | 35.59 | 4200 | 0.5762 | 0.6918 | 0.6941 |
| 0.535 | 37.29 | 4400 | 0.5687 | 0.7004 | 0.7010 |
| 0.5322 | 38.98 | 4600 | 0.5692 | 0.6997 | 0.6999 |
| 0.5319 | 40.68 | 4800 | 0.5766 | 0.6927 | 0.6946 |
| 0.5234 | 42.37 | 5000 | 0.5726 | 0.7060 | 0.7063 |
| 0.5279 | 44.07 | 5200 | 0.5664 | 0.7096 | 0.7095 |
| 0.5257 | 45.76 | 5400 | 0.5639 | 0.7023 | 0.7026 |
| 0.5228 | 47.46 | 5600 | 0.5722 | 0.7006 | 0.7005 |
| 0.525 | 49.15 | 5800 | 0.5789 | 0.6988 | 0.6999 |
| 0.5217 | 50.85 | 6000 | 0.5644 | 0.7024 | 0.7026 |
| 0.5195 | 52.54 | 6200 | 0.5624 | 0.7032 | 0.7031 |
| 0.5179 | 54.24 | 6400 | 0.5679 | 0.7021 | 0.7026 |
| 0.516 | 55.93 | 6600 | 0.5662 | 0.6996 | 0.6999 |
| 0.5151 | 57.63 | 6800 | 0.5637 | 0.6995 | 0.6994 |
| 0.5146 | 59.32 | 7000 | 0.5714 | 0.6994 | 0.6999 |
| 0.5122 | 61.02 | 7200 | 0.5674 | 0.7024 | 0.7026 |
| 0.5114 | 62.71 | 7400 | 0.5661 | 0.7042 | 0.7042 |
| 0.5141 | 64.41 | 7600 | 0.5717 | 0.7016 | 0.7021 |
| 0.5115 | 66.1 | 7800 | 0.5735 | 0.7047 | 0.7053 |
| 0.5046 | 67.8 | 8000 | 0.5721 | 0.7021 | 0.7021 |
| 0.5073 | 69.49 | 8200 | 0.5651 | 0.7016 | 0.7015 |
| 0.5053 | 71.19 | 8400 | 0.5697 | 0.7041 | 0.7042 |
| 0.5056 | 72.88 | 8600 | 0.5703 | 0.7010 | 0.7010 |
| 0.5026 | 74.58 | 8800 | 0.5760 | 0.7033 | 0.7037 |
| 0.5047 | 76.27 | 9000 | 0.5747 | 0.7009 | 0.7010 |
| 0.5052 | 77.97 | 9200 | 0.5714 | 0.7051 | 0.7053 |
| 0.506 | 79.66 | 9400 | 0.5708 | 0.7042 | 0.7042 |
| 0.5056 | 81.36 | 9600 | 0.5719 | 0.7040 | 0.7042 |
| 0.504 | 83.05 | 9800 | 0.5709 | 0.7026 | 0.7026 |
| 0.5006 | 84.75 | 10000 | 0.5719 | 0.7019 | 0.7021 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_4-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:53:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_4-seqsight\_4096\_512\_15M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6101
* F1 Score: 0.6606
* Accuracy: 0.6606
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
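Since this card, too, leaves the section empty, here is the shortest viable sketch via the high-level `pipeline` API; the repo id comes from this card's metadata and the prompt is illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="golf2248/9i661kg")
print(generator("Once upon a time,", max_new_tokens=40)[0]["generated_text"])
```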
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/9i661kg | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:53:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6857
- F1 Score: 0.6840
- Accuracy: 0.6840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
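
As a rough illustration only — the card itself ships no training code — the list above maps onto Hugging Face `TrainingArguments` roughly as follows; only the listed values come from the card, and `output_dir` is illustrative:

```python
# Hedged sketch (not generated by the Trainer): the hyperparameters above,
# expressed as Hugging Face TrainingArguments. Only the listed values are
# taken from the card; everything else about the run is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_mouse_4-seqsight_4096_512_15M-L32_f",  # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```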
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6544 | 1.69 | 200 | 0.6273 | 0.6539 | 0.6548 |
| 0.6133 | 3.39 | 400 | 0.6103 | 0.6595 | 0.6633 |
| 0.592 | 5.08 | 600 | 0.5848 | 0.6849 | 0.6851 |
| 0.5788 | 6.78 | 800 | 0.5756 | 0.6915 | 0.6914 |
| 0.5698 | 8.47 | 1000 | 0.5724 | 0.6988 | 0.6994 |
| 0.5576 | 10.17 | 1200 | 0.5673 | 0.7050 | 0.7058 |
| 0.5497 | 11.86 | 1400 | 0.5583 | 0.7080 | 0.7079 |
| 0.5383 | 13.56 | 1600 | 0.5708 | 0.6974 | 0.6984 |
| 0.5296 | 15.25 | 1800 | 0.5646 | 0.7053 | 0.7053 |
| 0.5215 | 16.95 | 2000 | 0.5670 | 0.6980 | 0.6999 |
| 0.5131 | 18.64 | 2200 | 0.5732 | 0.7014 | 0.7026 |
| 0.5082 | 20.34 | 2400 | 0.5704 | 0.7107 | 0.7111 |
| 0.4993 | 22.03 | 2600 | 0.5817 | 0.6977 | 0.6994 |
| 0.4907 | 23.73 | 2800 | 0.5626 | 0.7087 | 0.7095 |
| 0.4849 | 25.42 | 3000 | 0.5776 | 0.7162 | 0.7164 |
| 0.4811 | 27.12 | 3200 | 0.5932 | 0.7007 | 0.7021 |
| 0.475 | 28.81 | 3400 | 0.5822 | 0.7147 | 0.7148 |
| 0.462 | 30.51 | 3600 | 0.5907 | 0.7125 | 0.7127 |
| 0.463 | 32.2 | 3800 | 0.6006 | 0.7152 | 0.7153 |
| 0.4569 | 33.9 | 4000 | 0.5908 | 0.7160 | 0.7159 |
| 0.4507 | 35.59 | 4200 | 0.6157 | 0.6987 | 0.7005 |
| 0.4443 | 37.29 | 4400 | 0.6111 | 0.7048 | 0.7053 |
| 0.4404 | 38.98 | 4600 | 0.6148 | 0.7122 | 0.7122 |
| 0.436 | 40.68 | 4800 | 0.6259 | 0.7101 | 0.7106 |
| 0.4232 | 42.37 | 5000 | 0.6250 | 0.7147 | 0.7148 |
| 0.4249 | 44.07 | 5200 | 0.6233 | 0.7070 | 0.7069 |
| 0.421 | 45.76 | 5400 | 0.6165 | 0.7136 | 0.7143 |
| 0.4161 | 47.46 | 5600 | 0.6374 | 0.7165 | 0.7164 |
| 0.4157 | 49.15 | 5800 | 0.6503 | 0.7111 | 0.7111 |
| 0.4082 | 50.85 | 6000 | 0.6373 | 0.7192 | 0.7191 |
| 0.4085 | 52.54 | 6200 | 0.6399 | 0.7153 | 0.7153 |
| 0.3995 | 54.24 | 6400 | 0.6552 | 0.7049 | 0.7053 |
| 0.3972 | 55.93 | 6600 | 0.6393 | 0.7080 | 0.7079 |
| 0.3917 | 57.63 | 6800 | 0.6566 | 0.7165 | 0.7164 |
| 0.3943 | 59.32 | 7000 | 0.6516 | 0.7131 | 0.7132 |
| 0.3912 | 61.02 | 7200 | 0.6507 | 0.7111 | 0.7111 |
| 0.3865 | 62.71 | 7400 | 0.6577 | 0.7079 | 0.7079 |
| 0.3877 | 64.41 | 7600 | 0.6608 | 0.7123 | 0.7127 |
| 0.3805 | 66.1 | 7800 | 0.6760 | 0.7120 | 0.7122 |
| 0.3721 | 67.8 | 8000 | 0.6708 | 0.7086 | 0.7084 |
| 0.3792 | 69.49 | 8200 | 0.6642 | 0.7091 | 0.7090 |
| 0.3775 | 71.19 | 8400 | 0.6657 | 0.7107 | 0.7106 |
| 0.3761 | 72.88 | 8600 | 0.6629 | 0.7096 | 0.7095 |
| 0.3703 | 74.58 | 8800 | 0.6837 | 0.7126 | 0.7127 |
| 0.3693 | 76.27 | 9000 | 0.6859 | 0.7061 | 0.7063 |
| 0.3669 | 77.97 | 9200 | 0.6852 | 0.7084 | 0.7084 |
| 0.3738 | 79.66 | 9400 | 0.6796 | 0.7074 | 0.7074 |
| 0.3689 | 81.36 | 9600 | 0.6798 | 0.7074 | 0.7074 |
| 0.3635 | 83.05 | 9800 | 0.6820 | 0.7080 | 0.7079 |
| 0.3644 | 84.75 | 10000 | 0.6825 | 0.7079 | 0.7079 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_4-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:53:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_4-seqsight\_4096\_512\_15M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6857
* F1 Score: 0.6840
* Accuracy: 0.6840
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
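
In the absence of card-provided code, a minimal, hedged sketch of loading this checkpoint with 🤗 Transformers; the repo id comes from this card's metadata, while the choice of auto class is an assumption, since the card does not state the architecture or task:

```python
# Hedged sketch: generic Hub loading with 🤗 Transformers. The auto class
# (and whether trust_remote_code is required) is an assumption.
from transformers import AutoModel, AutoTokenizer

repo_id = "fmshahata/phi-moe-switch_2exp"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)  # add trust_remote_code=True if the repo ships custom code
```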
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
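
Beyond the web calculator linked above, emissions can also be measured in-process. A hedged sketch using the codecarbon package — an assumption, since the card does not say how (or whether) emissions were tracked:

```python
# Hedged sketch: in-process emissions tracking with codecarbon, offered
# as an alternative to the web calculator. Nothing here is taken from
# the card, which leaves all emissions fields unfilled.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... training or inference workload would run here ...
emissions_kg = tracker.stop()  # estimated emissions in kg CO2-eq
print(f"estimated emissions: {emissions_kg:.4f} kg CO2eq")
```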
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | fmshahata/phi-moe-switch_2exp | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:54:16+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5011
- F1 Score: 0.7866
- Accuracy: 0.7866
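
The card ships no usage code; as a hedged sketch, one plausible way to load this PEFT adapter on top of its base model — the repo ids are taken from the card, but the auto class is an assumption, since the card does not state the task head:

```python
# Hedged sketch: loading the adapter over its base model with peft.
# Repo ids come from the card; AutoModel is an assumption.
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_15M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModel.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
```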
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6541 | 13.33 | 200 | 0.5477 | 0.7214 | 0.7238 |
| 0.5726 | 26.67 | 400 | 0.4790 | 0.7776 | 0.7782 |
| 0.519 | 40.0 | 600 | 0.4452 | 0.8067 | 0.8075 |
| 0.4727 | 53.33 | 800 | 0.4042 | 0.8324 | 0.8326 |
| 0.4311 | 66.67 | 1000 | 0.3913 | 0.8617 | 0.8619 |
| 0.3989 | 80.0 | 1200 | 0.3878 | 0.8452 | 0.8452 |
| 0.3732 | 93.33 | 1400 | 0.3857 | 0.8535 | 0.8536 |
| 0.3492 | 106.67 | 1600 | 0.3827 | 0.8490 | 0.8494 |
| 0.3338 | 120.0 | 1800 | 0.3922 | 0.8575 | 0.8577 |
| 0.3175 | 133.33 | 2000 | 0.3908 | 0.8493 | 0.8494 |
| 0.3048 | 146.67 | 2200 | 0.4073 | 0.8575 | 0.8577 |
| 0.2949 | 160.0 | 2400 | 0.4128 | 0.8450 | 0.8452 |
| 0.2834 | 173.33 | 2600 | 0.4232 | 0.8619 | 0.8619 |
| 0.2721 | 186.67 | 2800 | 0.4289 | 0.8452 | 0.8452 |
| 0.2632 | 200.0 | 3000 | 0.4327 | 0.8449 | 0.8452 |
| 0.2587 | 213.33 | 3200 | 0.4534 | 0.8452 | 0.8452 |
| 0.2484 | 226.67 | 3400 | 0.4390 | 0.8534 | 0.8536 |
| 0.2457 | 240.0 | 3600 | 0.4784 | 0.8326 | 0.8326 |
| 0.2434 | 253.33 | 3800 | 0.4525 | 0.8452 | 0.8452 |
| 0.2357 | 266.67 | 4000 | 0.4488 | 0.8368 | 0.8368 |
| 0.2277 | 280.0 | 4200 | 0.4695 | 0.8408 | 0.8410 |
| 0.2331 | 293.33 | 4400 | 0.4660 | 0.8367 | 0.8368 |
| 0.2232 | 306.67 | 4600 | 0.4873 | 0.8405 | 0.8410 |
| 0.2194 | 320.0 | 4800 | 0.4744 | 0.8365 | 0.8368 |
| 0.216 | 333.33 | 5000 | 0.4685 | 0.8282 | 0.8285 |
| 0.213 | 346.67 | 5200 | 0.4692 | 0.8324 | 0.8326 |
| 0.2118 | 360.0 | 5400 | 0.4635 | 0.8325 | 0.8326 |
| 0.2128 | 373.33 | 5600 | 0.4588 | 0.8282 | 0.8285 |
| 0.2054 | 386.67 | 5800 | 0.4728 | 0.8324 | 0.8326 |
| 0.2001 | 400.0 | 6000 | 0.4711 | 0.8282 | 0.8285 |
| 0.1999 | 413.33 | 6200 | 0.4887 | 0.8325 | 0.8326 |
| 0.1954 | 426.67 | 6400 | 0.4876 | 0.8281 | 0.8285 |
| 0.1946 | 440.0 | 6600 | 0.4965 | 0.8240 | 0.8243 |
| 0.1897 | 453.33 | 6800 | 0.4993 | 0.8240 | 0.8243 |
| 0.1918 | 466.67 | 7000 | 0.5031 | 0.8199 | 0.8201 |
| 0.191 | 480.0 | 7200 | 0.5048 | 0.8240 | 0.8243 |
| 0.1854 | 493.33 | 7400 | 0.4900 | 0.8282 | 0.8285 |
| 0.1861 | 506.67 | 7600 | 0.4985 | 0.8200 | 0.8201 |
| 0.1798 | 520.0 | 7800 | 0.5088 | 0.8241 | 0.8243 |
| 0.1871 | 533.33 | 8000 | 0.4990 | 0.8282 | 0.8285 |
| 0.184 | 546.67 | 8200 | 0.5025 | 0.8365 | 0.8368 |
| 0.1864 | 560.0 | 8400 | 0.5094 | 0.8282 | 0.8285 |
| 0.18 | 573.33 | 8600 | 0.5180 | 0.8240 | 0.8243 |
| 0.1826 | 586.67 | 8800 | 0.5049 | 0.8241 | 0.8243 |
| 0.1822 | 600.0 | 9000 | 0.5036 | 0.8241 | 0.8243 |
| 0.1745 | 613.33 | 9200 | 0.5176 | 0.8282 | 0.8285 |
| 0.1757 | 626.67 | 9400 | 0.5141 | 0.8241 | 0.8243 |
| 0.1785 | 640.0 | 9600 | 0.5104 | 0.8241 | 0.8243 |
| 0.1697 | 653.33 | 9800 | 0.5116 | 0.8241 | 0.8243 |
| 0.1809 | 666.67 | 10000 | 0.5110 | 0.8241 | 0.8243 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_3-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:54:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_3-seqsight\_4096\_512\_15M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5011
* F1 Score: 0.7866
* Accuracy: 0.7866
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9891
- F1 Score: 0.8281
- Accuracy: 0.8285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged PyTorch sketch of the optimizer and schedule follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
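
A hedged PyTorch rendering of the optimizer and scheduler lines above; the warmup step count is an assumption, since the card does not mention warmup:

```python
# Hedged sketch: the Adam settings and linear schedule above in plain
# PyTorch/transformers terms. The parameters are placeholders.
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters
optimizer = torch.optim.Adam(params, lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,        # assumption: card does not mention warmup
    num_training_steps=10_000,
)
```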
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6046 | 13.33 | 200 | 0.4456 | 0.7863 | 0.7866 |
| 0.4398 | 26.67 | 400 | 0.3750 | 0.8479 | 0.8494 |
| 0.3513 | 40.0 | 600 | 0.3555 | 0.8701 | 0.8703 |
| 0.2989 | 53.33 | 800 | 0.3583 | 0.8743 | 0.8745 |
| 0.2647 | 66.67 | 1000 | 0.3853 | 0.8570 | 0.8577 |
| 0.2338 | 80.0 | 1200 | 0.3965 | 0.8529 | 0.8536 |
| 0.2133 | 93.33 | 1400 | 0.3871 | 0.8490 | 0.8494 |
| 0.184 | 106.67 | 1600 | 0.4219 | 0.8529 | 0.8536 |
| 0.172 | 120.0 | 1800 | 0.4005 | 0.8618 | 0.8619 |
| 0.1484 | 133.33 | 2000 | 0.4358 | 0.8661 | 0.8661 |
| 0.1381 | 146.67 | 2200 | 0.4182 | 0.8660 | 0.8661 |
| 0.1287 | 160.0 | 2400 | 0.4432 | 0.8744 | 0.8745 |
| 0.1179 | 173.33 | 2600 | 0.4558 | 0.8703 | 0.8703 |
| 0.112 | 186.67 | 2800 | 0.4523 | 0.8448 | 0.8452 |
| 0.1042 | 200.0 | 3000 | 0.4517 | 0.8742 | 0.8745 |
| 0.094 | 213.33 | 3200 | 0.4399 | 0.8659 | 0.8661 |
| 0.0905 | 226.67 | 3400 | 0.4493 | 0.8619 | 0.8619 |
| 0.0864 | 240.0 | 3600 | 0.4652 | 0.8493 | 0.8494 |
| 0.0823 | 253.33 | 3800 | 0.4940 | 0.8577 | 0.8577 |
| 0.0777 | 266.67 | 4000 | 0.5251 | 0.8703 | 0.8703 |
| 0.0731 | 280.0 | 4200 | 0.5398 | 0.8619 | 0.8619 |
| 0.0718 | 293.33 | 4400 | 0.5079 | 0.8577 | 0.8577 |
| 0.0649 | 306.67 | 4600 | 0.5619 | 0.8661 | 0.8661 |
| 0.064 | 320.0 | 4800 | 0.5438 | 0.8451 | 0.8452 |
| 0.0629 | 333.33 | 5000 | 0.5502 | 0.8536 | 0.8536 |
| 0.0588 | 346.67 | 5200 | 0.5091 | 0.8661 | 0.8661 |
| 0.0575 | 360.0 | 5400 | 0.5668 | 0.8492 | 0.8494 |
| 0.0562 | 373.33 | 5600 | 0.5382 | 0.8826 | 0.8828 |
| 0.0532 | 386.67 | 5800 | 0.5470 | 0.8618 | 0.8619 |
| 0.0517 | 400.0 | 6000 | 0.5525 | 0.8536 | 0.8536 |
| 0.054 | 413.33 | 6200 | 0.5554 | 0.8494 | 0.8494 |
| 0.0497 | 426.67 | 6400 | 0.6015 | 0.8577 | 0.8577 |
| 0.0509 | 440.0 | 6600 | 0.5405 | 0.8618 | 0.8619 |
| 0.0461 | 453.33 | 6800 | 0.5920 | 0.8660 | 0.8661 |
| 0.0466 | 466.67 | 7000 | 0.5824 | 0.8744 | 0.8745 |
| 0.0429 | 480.0 | 7200 | 0.6150 | 0.8744 | 0.8745 |
| 0.0416 | 493.33 | 7400 | 0.5984 | 0.8577 | 0.8577 |
| 0.0401 | 506.67 | 7600 | 0.6160 | 0.8702 | 0.8703 |
| 0.0433 | 520.0 | 7800 | 0.6118 | 0.8576 | 0.8577 |
| 0.0412 | 533.33 | 8000 | 0.5844 | 0.8661 | 0.8661 |
| 0.0381 | 546.67 | 8200 | 0.6168 | 0.8576 | 0.8577 |
| 0.0425 | 560.0 | 8400 | 0.5694 | 0.8703 | 0.8703 |
| 0.0376 | 573.33 | 8600 | 0.5914 | 0.8786 | 0.8787 |
| 0.0389 | 586.67 | 8800 | 0.5895 | 0.8786 | 0.8787 |
| 0.0391 | 600.0 | 9000 | 0.5927 | 0.8745 | 0.8745 |
| 0.0392 | 613.33 | 9200 | 0.6015 | 0.8619 | 0.8619 |
| 0.0374 | 626.67 | 9400 | 0.6041 | 0.8745 | 0.8745 |
| 0.0406 | 640.0 | 9600 | 0.6066 | 0.8660 | 0.8661 |
| 0.037 | 653.33 | 9800 | 0.5993 | 0.8661 | 0.8661 |
| 0.0386 | 666.67 | 10000 | 0.5986 | 0.8703 | 0.8703 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_3-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:55:01+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_3-seqsight\_4096\_512\_15M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9891
* F1 Score: 0.8281
* Accuracy: 0.8285
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set (a hedged sketch of how such metrics are typically computed follows the list):
- Loss: 0.2779
- F1 Score: 0.8780
- Accuracy: 0.8780
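
A minimal sketch, assuming scikit-learn and binary labels — the card does not show the Trainer's metric code, and the macro averaging is an assumption:

```python
# Hedged sketch: computing F1 and accuracy as reported above. The
# averaging mode and the toy labels are assumptions.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0]  # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 0]  # placeholder model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1 score:", f1_score(y_true, y_pred, average="macro"))
```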
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4865 | 9.52 | 200 | 0.3900 | 0.7925 | 0.7927 |
| 0.3502 | 19.05 | 400 | 0.3748 | 0.8132 | 0.8140 |
| 0.3093 | 28.57 | 600 | 0.3435 | 0.8445 | 0.8445 |
| 0.2862 | 38.1 | 800 | 0.3369 | 0.8596 | 0.8598 |
| 0.2678 | 47.62 | 1000 | 0.3191 | 0.8628 | 0.8628 |
| 0.2521 | 57.14 | 1200 | 0.3183 | 0.8811 | 0.8811 |
| 0.2367 | 66.67 | 1400 | 0.3310 | 0.8871 | 0.8872 |
| 0.2323 | 76.19 | 1600 | 0.3365 | 0.8750 | 0.8750 |
| 0.2212 | 85.71 | 1800 | 0.3334 | 0.8750 | 0.8750 |
| 0.2104 | 95.24 | 2000 | 0.3484 | 0.8902 | 0.8902 |
| 0.208 | 104.76 | 2200 | 0.3370 | 0.8779 | 0.8780 |
| 0.2009 | 114.29 | 2400 | 0.3393 | 0.8902 | 0.8902 |
| 0.1977 | 123.81 | 2600 | 0.3537 | 0.8810 | 0.8811 |
| 0.1943 | 133.33 | 2800 | 0.3527 | 0.8780 | 0.8780 |
| 0.1876 | 142.86 | 3000 | 0.3524 | 0.8780 | 0.8780 |
| 0.182 | 152.38 | 3200 | 0.3624 | 0.8779 | 0.8780 |
| 0.1797 | 161.9 | 3400 | 0.3658 | 0.8780 | 0.8780 |
| 0.1735 | 171.43 | 3600 | 0.3764 | 0.8658 | 0.8659 |
| 0.1693 | 180.95 | 3800 | 0.3777 | 0.8780 | 0.8780 |
| 0.1655 | 190.48 | 4000 | 0.3893 | 0.8719 | 0.8720 |
| 0.1637 | 200.0 | 4200 | 0.3935 | 0.8628 | 0.8628 |
| 0.1606 | 209.52 | 4400 | 0.3768 | 0.8719 | 0.8720 |
| 0.1592 | 219.05 | 4600 | 0.3964 | 0.8719 | 0.8720 |
| 0.1559 | 228.57 | 4800 | 0.4098 | 0.8658 | 0.8659 |
| 0.1559 | 238.1 | 5000 | 0.4274 | 0.8567 | 0.8567 |
| 0.153 | 247.62 | 5200 | 0.4074 | 0.8689 | 0.8689 |
| 0.1479 | 257.14 | 5400 | 0.4058 | 0.8689 | 0.8689 |
| 0.1458 | 266.67 | 5600 | 0.4290 | 0.8628 | 0.8628 |
| 0.1473 | 276.19 | 5800 | 0.4177 | 0.8567 | 0.8567 |
| 0.1406 | 285.71 | 6000 | 0.4082 | 0.8719 | 0.8720 |
| 0.1417 | 295.24 | 6200 | 0.4173 | 0.8628 | 0.8628 |
| 0.1401 | 304.76 | 6400 | 0.4135 | 0.8567 | 0.8567 |
| 0.1396 | 314.29 | 6600 | 0.4015 | 0.8750 | 0.8750 |
| 0.1398 | 323.81 | 6800 | 0.4065 | 0.8567 | 0.8567 |
| 0.1351 | 333.33 | 7000 | 0.4180 | 0.8659 | 0.8659 |
| 0.1369 | 342.86 | 7200 | 0.4041 | 0.8659 | 0.8659 |
| 0.1295 | 352.38 | 7400 | 0.4232 | 0.8689 | 0.8689 |
| 0.1362 | 361.9 | 7600 | 0.4126 | 0.8597 | 0.8598 |
| 0.1354 | 371.43 | 7800 | 0.4204 | 0.8689 | 0.8689 |
| 0.1271 | 380.95 | 8000 | 0.4247 | 0.8658 | 0.8659 |
| 0.1326 | 390.48 | 8200 | 0.4123 | 0.8658 | 0.8659 |
| 0.1288 | 400.0 | 8400 | 0.4256 | 0.8628 | 0.8628 |
| 0.128 | 409.52 | 8600 | 0.4215 | 0.8750 | 0.8750 |
| 0.1237 | 419.05 | 8800 | 0.4332 | 0.8597 | 0.8598 |
| 0.1299 | 428.57 | 9000 | 0.4228 | 0.8597 | 0.8598 |
| 0.1278 | 438.1 | 9200 | 0.4213 | 0.8689 | 0.8689 |
| 0.1271 | 447.62 | 9400 | 0.4287 | 0.8597 | 0.8598 |
| 0.1241 | 457.14 | 9600 | 0.4240 | 0.8689 | 0.8689 |
| 0.1241 | 466.67 | 9800 | 0.4266 | 0.8689 | 0.8689 |
| 0.124 | 476.19 | 10000 | 0.4254 | 0.8689 | 0.8689 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_2-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:56:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_2-seqsight\_4096\_512\_15M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2779
* F1 Score: 0.8780
* Accuracy: 0.8780
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3687
- F1 Score: 0.8368
- Accuracy: 0.8368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5528 | 13.33 | 200 | 0.3800 | 0.8360 | 0.8368 |
| 0.3385 | 26.67 | 400 | 0.3713 | 0.8654 | 0.8661 |
| 0.2527 | 40.0 | 600 | 0.3684 | 0.8619 | 0.8619 |
| 0.1912 | 53.33 | 800 | 0.4153 | 0.8535 | 0.8536 |
| 0.1505 | 66.67 | 1000 | 0.4190 | 0.8614 | 0.8619 |
| 0.1168 | 80.0 | 1200 | 0.4873 | 0.8698 | 0.8703 |
| 0.0996 | 93.33 | 1400 | 0.4635 | 0.8536 | 0.8536 |
| 0.0851 | 106.67 | 1600 | 0.5159 | 0.8529 | 0.8536 |
| 0.0699 | 120.0 | 1800 | 0.5508 | 0.8784 | 0.8787 |
| 0.0595 | 133.33 | 2000 | 0.5934 | 0.8574 | 0.8577 |
| 0.0505 | 146.67 | 2200 | 0.6054 | 0.8576 | 0.8577 |
| 0.0504 | 160.0 | 2400 | 0.5825 | 0.8701 | 0.8703 |
| 0.0411 | 173.33 | 2600 | 0.6155 | 0.8619 | 0.8619 |
| 0.0388 | 186.67 | 2800 | 0.6479 | 0.8661 | 0.8661 |
| 0.0359 | 200.0 | 3000 | 0.6592 | 0.8615 | 0.8619 |
| 0.0327 | 213.33 | 3200 | 0.7244 | 0.8702 | 0.8703 |
| 0.0315 | 226.67 | 3400 | 0.6512 | 0.8577 | 0.8577 |
| 0.0306 | 240.0 | 3600 | 0.6895 | 0.8576 | 0.8577 |
| 0.029 | 253.33 | 3800 | 0.7618 | 0.8577 | 0.8577 |
| 0.0271 | 266.67 | 4000 | 0.7633 | 0.8534 | 0.8536 |
| 0.022 | 280.0 | 4200 | 0.7738 | 0.8492 | 0.8494 |
| 0.0238 | 293.33 | 4400 | 0.7606 | 0.8618 | 0.8619 |
| 0.0195 | 306.67 | 4600 | 0.7799 | 0.8659 | 0.8661 |
| 0.0208 | 320.0 | 4800 | 0.7812 | 0.8617 | 0.8619 |
| 0.0182 | 333.33 | 5000 | 0.7868 | 0.8744 | 0.8745 |
| 0.0203 | 346.67 | 5200 | 0.8271 | 0.8739 | 0.8745 |
| 0.0178 | 360.0 | 5400 | 0.7416 | 0.8703 | 0.8703 |
| 0.016 | 373.33 | 5600 | 0.7589 | 0.8744 | 0.8745 |
| 0.0178 | 386.67 | 5800 | 0.7254 | 0.8702 | 0.8703 |
| 0.016 | 400.0 | 6000 | 0.7839 | 0.8619 | 0.8619 |
| 0.0136 | 413.33 | 6200 | 0.8584 | 0.8619 | 0.8619 |
| 0.0143 | 426.67 | 6400 | 0.8470 | 0.8783 | 0.8787 |
| 0.0135 | 440.0 | 6600 | 0.8244 | 0.8744 | 0.8745 |
| 0.0149 | 453.33 | 6800 | 0.7704 | 0.8786 | 0.8787 |
| 0.0146 | 466.67 | 7000 | 0.8063 | 0.8744 | 0.8745 |
| 0.0116 | 480.0 | 7200 | 0.8048 | 0.8619 | 0.8619 |
| 0.01 | 493.33 | 7400 | 0.8597 | 0.8744 | 0.8745 |
| 0.0113 | 506.67 | 7600 | 0.8415 | 0.8660 | 0.8661 |
| 0.0096 | 520.0 | 7800 | 0.8698 | 0.8786 | 0.8787 |
| 0.011 | 533.33 | 8000 | 0.8537 | 0.8619 | 0.8619 |
| 0.0095 | 546.67 | 8200 | 0.8248 | 0.8786 | 0.8787 |
| 0.0099 | 560.0 | 8400 | 0.8428 | 0.8701 | 0.8703 |
| 0.0093 | 573.33 | 8600 | 0.8373 | 0.8619 | 0.8619 |
| 0.0099 | 586.67 | 8800 | 0.8226 | 0.8619 | 0.8619 |
| 0.0075 | 600.0 | 9000 | 0.8746 | 0.8744 | 0.8745 |
| 0.0087 | 613.33 | 9200 | 0.8475 | 0.8661 | 0.8661 |
| 0.0083 | 626.67 | 9400 | 0.8725 | 0.8661 | 0.8661 |
| 0.0101 | 640.0 | 9600 | 0.8741 | 0.8619 | 0.8619 |
| 0.007 | 653.33 | 9800 | 0.8736 | 0.8702 | 0.8703 |
| 0.0107 | 666.67 | 10000 | 0.8653 | 0.8661 | 0.8661 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:56:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_3-seqsight\_4096\_512\_15M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3687
* F1 Score: 0.8368
* Accuracy: 0.8368
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6301
- F1 Score: 0.8780
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4196 | 9.52 | 200 | 0.3453 | 0.8411 | 0.8415 |
| 0.2843 | 19.05 | 400 | 0.3329 | 0.8658 | 0.8659 |
| 0.2436 | 28.57 | 600 | 0.3090 | 0.8840 | 0.8841 |
| 0.217 | 38.1 | 800 | 0.3200 | 0.8841 | 0.8841 |
| 0.196 | 47.62 | 1000 | 0.3318 | 0.8872 | 0.8872 |
| 0.1761 | 57.14 | 1200 | 0.3589 | 0.8658 | 0.8659 |
| 0.1625 | 66.67 | 1400 | 0.3791 | 0.8720 | 0.8720 |
| 0.1532 | 76.19 | 1600 | 0.4121 | 0.8627 | 0.8628 |
| 0.1401 | 85.71 | 1800 | 0.3999 | 0.8658 | 0.8659 |
| 0.1265 | 95.24 | 2000 | 0.4633 | 0.8749 | 0.875 |
| 0.1219 | 104.76 | 2200 | 0.4277 | 0.8689 | 0.8689 |
| 0.1168 | 114.29 | 2400 | 0.4125 | 0.8659 | 0.8659 |
| 0.1082 | 123.81 | 2600 | 0.4537 | 0.8689 | 0.8689 |
| 0.0991 | 133.33 | 2800 | 0.4396 | 0.8780 | 0.8780 |
| 0.0917 | 142.86 | 3000 | 0.4864 | 0.8749 | 0.875 |
| 0.0843 | 152.38 | 3200 | 0.4842 | 0.8720 | 0.8720 |
| 0.0858 | 161.9 | 3400 | 0.4928 | 0.8628 | 0.8628 |
| 0.0744 | 171.43 | 3600 | 0.5215 | 0.8750 | 0.875 |
| 0.0724 | 180.95 | 3800 | 0.5353 | 0.8658 | 0.8659 |
| 0.0697 | 190.48 | 4000 | 0.5285 | 0.8841 | 0.8841 |
| 0.0643 | 200.0 | 4200 | 0.5673 | 0.8780 | 0.8780 |
| 0.0653 | 209.52 | 4400 | 0.5322 | 0.8841 | 0.8841 |
| 0.0614 | 219.05 | 4600 | 0.5746 | 0.8750 | 0.875 |
| 0.0534 | 228.57 | 4800 | 0.6451 | 0.8719 | 0.8720 |
| 0.0571 | 238.1 | 5000 | 0.6393 | 0.8746 | 0.875 |
| 0.0535 | 247.62 | 5200 | 0.5712 | 0.8841 | 0.8841 |
| 0.0496 | 257.14 | 5400 | 0.6100 | 0.8779 | 0.8780 |
| 0.0464 | 266.67 | 5600 | 0.6278 | 0.8871 | 0.8872 |
| 0.0495 | 276.19 | 5800 | 0.6104 | 0.8840 | 0.8841 |
| 0.0446 | 285.71 | 6000 | 0.6431 | 0.8779 | 0.8780 |
| 0.0449 | 295.24 | 6200 | 0.6230 | 0.8841 | 0.8841 |
| 0.0427 | 304.76 | 6400 | 0.6259 | 0.8750 | 0.875 |
| 0.0434 | 314.29 | 6600 | 0.6362 | 0.8810 | 0.8811 |
| 0.0426 | 323.81 | 6800 | 0.6241 | 0.8840 | 0.8841 |
| 0.0403 | 333.33 | 7000 | 0.6379 | 0.8871 | 0.8872 |
| 0.0391 | 342.86 | 7200 | 0.6461 | 0.8780 | 0.8780 |
| 0.0377 | 352.38 | 7400 | 0.6628 | 0.8841 | 0.8841 |
| 0.0372 | 361.9 | 7600 | 0.6478 | 0.8901 | 0.8902 |
| 0.0347 | 371.43 | 7800 | 0.6833 | 0.8870 | 0.8872 |
| 0.0342 | 380.95 | 8000 | 0.6775 | 0.8871 | 0.8872 |
| 0.0345 | 390.48 | 8200 | 0.6877 | 0.8870 | 0.8872 |
| 0.0341 | 400.0 | 8400 | 0.6865 | 0.8932 | 0.8933 |
| 0.0353 | 409.52 | 8600 | 0.6739 | 0.8870 | 0.8872 |
| 0.0325 | 419.05 | 8800 | 0.6810 | 0.8870 | 0.8872 |
| 0.0341 | 428.57 | 9000 | 0.6819 | 0.8870 | 0.8872 |
| 0.0315 | 438.1 | 9200 | 0.6780 | 0.8870 | 0.8872 |
| 0.0305 | 447.62 | 9400 | 0.6929 | 0.8870 | 0.8872 |
| 0.029 | 457.14 | 9600 | 0.6948 | 0.8870 | 0.8872 |
| 0.0291 | 466.67 | 9800 | 0.6944 | 0.8870 | 0.8872 |
| 0.0317 | 476.19 | 10000 | 0.6863 | 0.8901 | 0.8902 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_2-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:57:20+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_2-seqsight\_4096\_512\_15M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6301
* F1 Score: 0.8780
* Accuracy: 0.8780
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
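Pending the official snippet, the repo's `llama` and `text-generation` tags suggest a standard pipeline call; the prompt and generation settings below are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="golf2248/bt5ia0i")
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```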
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/bt5ia0i | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T18:57:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5027
- F1 Score: 0.8750
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the optimizer/scheduler sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
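As referenced above, a hedged sketch of the Adam plus linear-schedule pairing from that list, using plain PyTorch and the `transformers` scheduler helper; the tiny stand-in model and zero warmup steps are assumptions:

```python
import torch
from torch import nn
from transformers import get_linear_schedule_with_warmup

model = nn.Linear(8, 2)  # stand-in for the real PEFT-wrapped network
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # warmup not stated
)

for _ in range(3):  # one optimizer step plus one scheduler step per batch
    optimizer.step()
    scheduler.step()
```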
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3806 | 9.52 | 200 | 0.3269 | 0.8597 | 0.8598 |
| 0.2415 | 19.05 | 400 | 0.3152 | 0.8658 | 0.8659 |
| 0.1924 | 28.57 | 600 | 0.3159 | 0.8811 | 0.8811 |
| 0.16 | 38.1 | 800 | 0.3525 | 0.8689 | 0.8689 |
| 0.1335 | 47.62 | 1000 | 0.4108 | 0.8780 | 0.8780 |
| 0.1085 | 57.14 | 1200 | 0.4464 | 0.8749 | 0.875 |
| 0.0881 | 66.67 | 1400 | 0.4792 | 0.8689 | 0.8689 |
| 0.073 | 76.19 | 1600 | 0.5455 | 0.8780 | 0.8780 |
| 0.0586 | 85.71 | 1800 | 0.5630 | 0.8811 | 0.8811 |
| 0.049 | 95.24 | 2000 | 0.6062 | 0.8869 | 0.8872 |
| 0.0428 | 104.76 | 2200 | 0.5581 | 0.8994 | 0.8994 |
| 0.0415 | 114.29 | 2400 | 0.5765 | 0.8871 | 0.8872 |
| 0.0343 | 123.81 | 2600 | 0.5885 | 0.8902 | 0.8902 |
| 0.0314 | 133.33 | 2800 | 0.6035 | 0.8872 | 0.8872 |
| 0.0289 | 142.86 | 3000 | 0.6384 | 0.8840 | 0.8841 |
| 0.0236 | 152.38 | 3200 | 0.7343 | 0.8871 | 0.8872 |
| 0.0282 | 161.9 | 3400 | 0.7517 | 0.8748 | 0.875 |
| 0.0214 | 171.43 | 3600 | 0.7033 | 0.8931 | 0.8933 |
| 0.0207 | 180.95 | 3800 | 0.7171 | 0.8839 | 0.8841 |
| 0.0199 | 190.48 | 4000 | 0.7869 | 0.8840 | 0.8841 |
| 0.0173 | 200.0 | 4200 | 0.8030 | 0.8841 | 0.8841 |
| 0.0178 | 209.52 | 4400 | 0.7712 | 0.8808 | 0.8811 |
| 0.0174 | 219.05 | 4600 | 0.6971 | 0.8963 | 0.8963 |
| 0.0138 | 228.57 | 4800 | 0.8396 | 0.8658 | 0.8659 |
| 0.0138 | 238.1 | 5000 | 0.7916 | 0.8808 | 0.8811 |
| 0.0128 | 247.62 | 5200 | 0.7458 | 0.8963 | 0.8963 |
| 0.0127 | 257.14 | 5400 | 0.8319 | 0.8901 | 0.8902 |
| 0.0109 | 266.67 | 5600 | 0.8372 | 0.8870 | 0.8872 |
| 0.0111 | 276.19 | 5800 | 0.8510 | 0.8901 | 0.8902 |
| 0.0112 | 285.71 | 6000 | 0.8220 | 0.8870 | 0.8872 |
| 0.0097 | 295.24 | 6200 | 0.8300 | 0.8901 | 0.8902 |
| 0.0102 | 304.76 | 6400 | 0.8657 | 0.8870 | 0.8872 |
| 0.01 | 314.29 | 6600 | 0.8303 | 0.8870 | 0.8872 |
| 0.0088 | 323.81 | 6800 | 0.8713 | 0.8809 | 0.8811 |
| 0.0087 | 333.33 | 7000 | 0.8464 | 0.8901 | 0.8902 |
| 0.0084 | 342.86 | 7200 | 0.8823 | 0.8932 | 0.8933 |
| 0.007 | 352.38 | 7400 | 0.9236 | 0.8840 | 0.8841 |
| 0.0088 | 361.9 | 7600 | 0.8623 | 0.8870 | 0.8872 |
| 0.0065 | 371.43 | 7800 | 0.8455 | 0.8932 | 0.8933 |
| 0.0068 | 380.95 | 8000 | 0.8949 | 0.8963 | 0.8963 |
| 0.0084 | 390.48 | 8200 | 0.8279 | 0.8962 | 0.8963 |
| 0.0063 | 400.0 | 8400 | 0.8768 | 0.8901 | 0.8902 |
| 0.0057 | 409.52 | 8600 | 0.9100 | 0.8901 | 0.8902 |
| 0.0069 | 419.05 | 8800 | 0.8906 | 0.8932 | 0.8933 |
| 0.0065 | 428.57 | 9000 | 0.9110 | 0.8840 | 0.8841 |
| 0.0069 | 438.1 | 9200 | 0.8759 | 0.8901 | 0.8902 |
| 0.0053 | 447.62 | 9400 | 0.9003 | 0.8932 | 0.8933 |
| 0.0046 | 457.14 | 9600 | 0.9264 | 0.8901 | 0.8902 |
| 0.0046 | 466.67 | 9800 | 0.9214 | 0.8963 | 0.8963 |
| 0.0051 | 476.19 | 10000 | 0.9170 | 0.8963 | 0.8963 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_mouse_2-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:57:46+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_mouse\_2-seqsight\_4096\_512\_15M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5027
* F1 Score: 0.8750
* Accuracy: 0.875
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** yeetech19
- **License:** apache-2.0
- **Finetuned from model:** zhichen/Llama3-Chinese
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
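The adapter ships without loading code; assuming it is a standard LoRA checkpoint, one plausible recipe is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zhichen/Llama3-Chinese")
base = AutoModelForCausalLM.from_pretrained("zhichen/Llama3-Chinese")
model = PeftModel.from_pretrained(base, "yeetech19/lora_adapter_v2")  # attach LoRA
```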
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "zhichen/Llama3-Chinese"} | yeetech19/lora_adapter_v2 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:zhichen/Llama3-Chinese",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:57:46+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-zhichen/Llama3-Chinese #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: yeetech19
- License: apache-2.0
- Finetuned from model: zhichen/Llama3-Chinese
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: yeetech19\n- License: apache-2.0\n- Finetuned from model : zhichen/Llama3-Chinese\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-zhichen/Llama3-Chinese #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: yeetech19\n- License: apache-2.0\n- Finetuned from model : zhichen/Llama3-Chinese\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
56,
73
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-zhichen/Llama3-Chinese #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: yeetech19\n- License: apache-2.0\n- Finetuned from model : zhichen/Llama3-Chinese\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
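This section is left open in the card. Judging only from the repository name, the checkpoint appears to hold PEFT adapters for Llama-2-7b-hf; the sketch below encodes that assumption and should be checked against the adapter config before use:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base
model = PeftModel.from_pretrained(
    base,
    "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_8_5e-05",
)
```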
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
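For readers filling in the fields above, the calculator's estimate reduces to a simple product; every number in the sketch below is an illustrative placeholder, not a value from this card:

```python
# kg CO2eq ≈ power draw (kW) × hours × grid carbon intensity (kg CO2eq/kWh) × PUE
gpu_power_kw = 0.3      # e.g., one ~300 W accelerator (placeholder)
hours_used = 24.0       # "Hours used" field (placeholder)
carbon_intensity = 0.4  # kg CO2eq per kWh in the compute region (placeholder)
pue = 1.1               # data-center power usage effectiveness (placeholder)

print(f"{gpu_power_kw * hours_used * carbon_intensity * pue:.2f} kg CO2eq")
```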
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_8_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T18:58:16+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4189
- F1 Score: 0.8268
- Accuracy: 0.8262
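The adapter configuration itself is not recorded in the card; a hedged sketch of building a comparable LoRA wrapper with PEFT is shown below, where the rank, alpha, dropout, target-module names, and three-way label count are all illustrative assumptions:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M", num_labels=3  # label count assumed
)
peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["query", "value"],  # module names are assumptions
    task_type="SEQ_CLS",
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()
```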
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9723 | 0.7 | 200 | 0.9473 | 0.4075 | 0.5644 |
| 0.9335 | 1.4 | 400 | 0.8961 | 0.5071 | 0.5804 |
| 0.7745 | 2.1 | 600 | 0.6515 | 0.7094 | 0.7100 |
| 0.6342 | 2.8 | 800 | 0.5971 | 0.7378 | 0.7376 |
| 0.601 | 3.5 | 1000 | 0.5694 | 0.7518 | 0.7508 |
| 0.5767 | 4.2 | 1200 | 0.5445 | 0.7677 | 0.7674 |
| 0.5527 | 4.9 | 1400 | 0.5276 | 0.7729 | 0.7722 |
| 0.5385 | 5.59 | 1600 | 0.5263 | 0.7760 | 0.7751 |
| 0.5382 | 6.29 | 1800 | 0.5080 | 0.7836 | 0.7830 |
| 0.5148 | 6.99 | 2000 | 0.5005 | 0.7827 | 0.7821 |
| 0.5161 | 7.69 | 2200 | 0.4859 | 0.7865 | 0.7872 |
| 0.5107 | 8.39 | 2400 | 0.4937 | 0.7897 | 0.7889 |
| 0.5047 | 9.09 | 2600 | 0.4884 | 0.7899 | 0.7891 |
| 0.4975 | 9.79 | 2800 | 0.4804 | 0.7944 | 0.7937 |
| 0.4949 | 10.49 | 3000 | 0.4809 | 0.7965 | 0.7957 |
| 0.4894 | 11.19 | 3200 | 0.4876 | 0.7936 | 0.7926 |
| 0.4921 | 11.89 | 3400 | 0.4676 | 0.7984 | 0.7979 |
| 0.4804 | 12.59 | 3600 | 0.4771 | 0.7979 | 0.7970 |
| 0.4785 | 13.29 | 3800 | 0.4690 | 0.8005 | 0.7996 |
| 0.4801 | 13.99 | 4000 | 0.4731 | 0.8043 | 0.8034 |
| 0.4741 | 14.69 | 4200 | 0.4591 | 0.8068 | 0.8062 |
| 0.4781 | 15.38 | 4400 | 0.4695 | 0.8008 | 0.7999 |
| 0.4698 | 16.08 | 4600 | 0.4857 | 0.7940 | 0.7931 |
| 0.4668 | 16.78 | 4800 | 0.4646 | 0.8041 | 0.8032 |
| 0.4631 | 17.48 | 5000 | 0.4707 | 0.8016 | 0.8005 |
| 0.4611 | 18.18 | 5200 | 0.4489 | 0.8111 | 0.8106 |
| 0.4625 | 18.88 | 5400 | 0.4618 | 0.8052 | 0.8043 |
| 0.4656 | 19.58 | 5600 | 0.4474 | 0.8121 | 0.8115 |
| 0.4619 | 20.28 | 5800 | 0.4488 | 0.8120 | 0.8113 |
| 0.4579 | 20.98 | 6000 | 0.4470 | 0.8124 | 0.8117 |
| 0.4604 | 21.68 | 6200 | 0.4558 | 0.8100 | 0.8091 |
| 0.4515 | 22.38 | 6400 | 0.4484 | 0.8118 | 0.8110 |
| 0.4469 | 23.08 | 6600 | 0.4473 | 0.8136 | 0.8128 |
| 0.4531 | 23.78 | 6800 | 0.4492 | 0.8103 | 0.8095 |
| 0.4489 | 24.48 | 7000 | 0.4544 | 0.8104 | 0.8095 |
| 0.45 | 25.17 | 7200 | 0.4502 | 0.8119 | 0.8110 |
| 0.448 | 25.87 | 7400 | 0.4499 | 0.8141 | 0.8132 |
| 0.448 | 26.57 | 7600 | 0.4580 | 0.8096 | 0.8086 |
| 0.4459 | 27.27 | 7800 | 0.4528 | 0.8128 | 0.8119 |
| 0.4497 | 27.97 | 8000 | 0.4429 | 0.8158 | 0.8150 |
| 0.4456 | 28.67 | 8200 | 0.4539 | 0.8115 | 0.8106 |
| 0.4408 | 29.37 | 8400 | 0.4443 | 0.8146 | 0.8137 |
| 0.4427 | 30.07 | 8600 | 0.4370 | 0.8185 | 0.8178 |
| 0.4435 | 30.77 | 8800 | 0.4456 | 0.8146 | 0.8137 |
| 0.4419 | 31.47 | 9000 | 0.4465 | 0.8130 | 0.8121 |
| 0.4429 | 32.17 | 9200 | 0.4433 | 0.8152 | 0.8143 |
| 0.4408 | 32.87 | 9400 | 0.4452 | 0.8141 | 0.8132 |
| 0.4474 | 33.57 | 9600 | 0.4457 | 0.8146 | 0.8137 |
| 0.4383 | 34.27 | 9800 | 0.4440 | 0.8156 | 0.8148 |
| 0.4384 | 34.97 | 10000 | 0.4441 | 0.8150 | 0.8141 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:59:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_15M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4189
* F1 Score: 0.8268
* Accuracy: 0.8262
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3625
- F1 Score: 0.8558
- Accuracy: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9617 | 0.7 | 200 | 0.9080 | 0.4586 | 0.5623 |
| 0.7438 | 1.4 | 400 | 0.5836 | 0.7380 | 0.7378 |
| 0.5658 | 2.1 | 600 | 0.5211 | 0.7703 | 0.7696 |
| 0.5131 | 2.8 | 800 | 0.4952 | 0.7834 | 0.7826 |
| 0.4935 | 3.5 | 1000 | 0.4709 | 0.7979 | 0.7972 |
| 0.4825 | 4.2 | 1200 | 0.4707 | 0.7963 | 0.7950 |
| 0.4655 | 4.9 | 1400 | 0.4551 | 0.8074 | 0.8064 |
| 0.4545 | 5.59 | 1600 | 0.4599 | 0.8087 | 0.8084 |
| 0.4506 | 6.29 | 1800 | 0.4387 | 0.8145 | 0.8135 |
| 0.4324 | 6.99 | 2000 | 0.4328 | 0.8196 | 0.8185 |
| 0.4262 | 7.69 | 2200 | 0.4177 | 0.8231 | 0.8224 |
| 0.4265 | 8.39 | 2400 | 0.4268 | 0.8242 | 0.8233 |
| 0.4156 | 9.09 | 2600 | 0.4281 | 0.8247 | 0.8235 |
| 0.4113 | 9.79 | 2800 | 0.4151 | 0.8262 | 0.8253 |
| 0.4078 | 10.49 | 3000 | 0.4062 | 0.8322 | 0.8314 |
| 0.4013 | 11.19 | 3200 | 0.4180 | 0.8284 | 0.8273 |
| 0.4036 | 11.89 | 3400 | 0.4114 | 0.8299 | 0.8290 |
| 0.3911 | 12.59 | 3600 | 0.4182 | 0.8307 | 0.8299 |
| 0.3886 | 13.29 | 3800 | 0.4032 | 0.8337 | 0.8327 |
| 0.3908 | 13.99 | 4000 | 0.4155 | 0.8286 | 0.8275 |
| 0.3805 | 14.69 | 4200 | 0.3981 | 0.8373 | 0.8365 |
| 0.3858 | 15.38 | 4400 | 0.3991 | 0.8406 | 0.8398 |
| 0.3773 | 16.08 | 4600 | 0.4261 | 0.8269 | 0.8260 |
| 0.3766 | 16.78 | 4800 | 0.3980 | 0.8395 | 0.8384 |
| 0.3702 | 17.48 | 5000 | 0.4159 | 0.8308 | 0.8297 |
| 0.3686 | 18.18 | 5200 | 0.3865 | 0.8447 | 0.8439 |
| 0.3686 | 18.88 | 5400 | 0.3927 | 0.8438 | 0.8428 |
| 0.3676 | 19.58 | 5600 | 0.3750 | 0.8499 | 0.8492 |
| 0.3677 | 20.28 | 5800 | 0.3936 | 0.8416 | 0.8406 |
| 0.361 | 20.98 | 6000 | 0.3824 | 0.8480 | 0.8472 |
| 0.3623 | 21.68 | 6200 | 0.3847 | 0.8465 | 0.8457 |
| 0.357 | 22.38 | 6400 | 0.3775 | 0.8509 | 0.8501 |
| 0.3523 | 23.08 | 6600 | 0.3842 | 0.8468 | 0.8459 |
| 0.3569 | 23.78 | 6800 | 0.3878 | 0.8453 | 0.8444 |
| 0.3522 | 24.48 | 7000 | 0.3946 | 0.8433 | 0.8424 |
| 0.3545 | 25.17 | 7200 | 0.3804 | 0.8480 | 0.8470 |
| 0.3494 | 25.87 | 7400 | 0.3975 | 0.8419 | 0.8409 |
| 0.3513 | 26.57 | 7600 | 0.4007 | 0.8381 | 0.8371 |
| 0.3486 | 27.27 | 7800 | 0.3907 | 0.8449 | 0.8439 |
| 0.3518 | 27.97 | 8000 | 0.3822 | 0.8488 | 0.8479 |
| 0.3521 | 28.67 | 8200 | 0.3912 | 0.8424 | 0.8415 |
| 0.3421 | 29.37 | 8400 | 0.3821 | 0.8490 | 0.8481 |
| 0.342 | 30.07 | 8600 | 0.3762 | 0.8538 | 0.8529 |
| 0.343 | 30.77 | 8800 | 0.3842 | 0.8462 | 0.8452 |
| 0.3424 | 31.47 | 9000 | 0.3890 | 0.8444 | 0.8435 |
| 0.3462 | 32.17 | 9200 | 0.3835 | 0.8477 | 0.8468 |
| 0.34 | 32.87 | 9400 | 0.3844 | 0.8461 | 0.8452 |
| 0.3485 | 33.57 | 9600 | 0.3851 | 0.8451 | 0.8441 |
| 0.3398 | 34.27 | 9800 | 0.3819 | 0.8485 | 0.8477 |
| 0.3388 | 34.97 | 10000 | 0.3827 | 0.8479 | 0.8470 |
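The F1 and accuracy columns above are the kind of numbers a Trainer `compute_metrics` hook produces; one such hook is sketched below with the `evaluate` library, where macro-averaged F1 is an assumption since the card does not name the variant it logs:

```python
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")
acc_metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_metric.compute(predictions=preds, references=labels,
                                average="macro")["f1"],
        "accuracy": acc_metric.compute(predictions=preds,
                                       references=labels)["accuracy"],
    }
```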
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T18:59:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_15M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3625
* F1 Score: 0.8558
* Accuracy: 0.8551
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
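Pending the authors' own snippet, here is a minimal, hedged sketch based only on this repo's `segformer` tag and id; the semantic-segmentation head, label set, and input file are assumptions, not documented facts.

```python
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Hypothetical usage sketch: the repo is tagged "segformer", so a semantic
# segmentation head is assumed; the actual task and labels are not documented.
repo = "janakipanneerselvam/SegFormer_Sunlit_nvidia_mit-b5_Almonds"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits    # shape: (batch, num_labels, H/4, W/4)
```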
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | janakipanneerselvam/SegFormer_Sunlit_nvidia_mit-b5_Almonds | null | [
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T19:00:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #segformer #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #segformer #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
31,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #segformer #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3278
- F1 Score: 0.8773
- Accuracy: 0.8768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
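For readers who want to reproduce this setup, a hedged sketch of the equivalent `transformers.TrainingArguments` follows; argument names use the standard API, but the authors' actual training script is not published.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters. The Trainer's default
# AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08, matching
# the card; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```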
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9437 | 0.7 | 200 | 0.7875 | 0.5777 | 0.6344 |
| 0.5833 | 1.4 | 400 | 0.5029 | 0.7792 | 0.7784 |
| 0.5022 | 2.1 | 600 | 0.4673 | 0.7985 | 0.7977 |
| 0.4574 | 2.8 | 800 | 0.4595 | 0.8028 | 0.8018 |
| 0.443 | 3.5 | 1000 | 0.4219 | 0.8222 | 0.8216 |
| 0.4304 | 4.2 | 1200 | 0.4373 | 0.8187 | 0.8176 |
| 0.4107 | 4.9 | 1400 | 0.4053 | 0.8347 | 0.8338 |
| 0.3993 | 5.59 | 1600 | 0.4180 | 0.8263 | 0.8260 |
| 0.3898 | 6.29 | 1800 | 0.3822 | 0.8464 | 0.8457 |
| 0.3746 | 6.99 | 2000 | 0.3862 | 0.8463 | 0.8455 |
| 0.3682 | 7.69 | 2200 | 0.3810 | 0.8463 | 0.8455 |
| 0.3631 | 8.39 | 2400 | 0.3731 | 0.8503 | 0.8496 |
| 0.3535 | 9.09 | 2600 | 0.3850 | 0.8482 | 0.8472 |
| 0.349 | 9.79 | 2800 | 0.3807 | 0.8468 | 0.8459 |
| 0.3446 | 10.49 | 3000 | 0.3597 | 0.8566 | 0.8560 |
| 0.3349 | 11.19 | 3200 | 0.3978 | 0.8461 | 0.8450 |
| 0.3372 | 11.89 | 3400 | 0.3640 | 0.8539 | 0.8531 |
| 0.3255 | 12.59 | 3600 | 0.3699 | 0.8528 | 0.8520 |
| 0.3194 | 13.29 | 3800 | 0.3573 | 0.8605 | 0.8597 |
| 0.3222 | 13.99 | 4000 | 0.3641 | 0.8592 | 0.8584 |
| 0.3105 | 14.69 | 4200 | 0.3593 | 0.8586 | 0.8577 |
| 0.3154 | 15.38 | 4400 | 0.3643 | 0.8591 | 0.8584 |
| 0.3092 | 16.08 | 4600 | 0.3741 | 0.8559 | 0.8551 |
| 0.3048 | 16.78 | 4800 | 0.3464 | 0.8668 | 0.8661 |
| 0.2974 | 17.48 | 5000 | 0.3718 | 0.8606 | 0.8597 |
| 0.296 | 18.18 | 5200 | 0.3517 | 0.8654 | 0.8648 |
| 0.2966 | 18.88 | 5400 | 0.3492 | 0.8677 | 0.8669 |
| 0.2939 | 19.58 | 5600 | 0.3418 | 0.8728 | 0.8722 |
| 0.2937 | 20.28 | 5800 | 0.3436 | 0.8730 | 0.8724 |
| 0.2906 | 20.98 | 6000 | 0.3409 | 0.8705 | 0.8698 |
| 0.2868 | 21.68 | 6200 | 0.3430 | 0.8715 | 0.8709 |
| 0.286 | 22.38 | 6400 | 0.3433 | 0.8735 | 0.8729 |
| 0.279 | 23.08 | 6600 | 0.3495 | 0.8727 | 0.8720 |
| 0.2834 | 23.78 | 6800 | 0.3377 | 0.8739 | 0.8733 |
| 0.2785 | 24.48 | 7000 | 0.3545 | 0.8652 | 0.8645 |
| 0.2772 | 25.17 | 7200 | 0.3397 | 0.8737 | 0.8731 |
| 0.2737 | 25.87 | 7400 | 0.3486 | 0.8709 | 0.8702 |
| 0.2735 | 26.57 | 7600 | 0.3572 | 0.8671 | 0.8663 |
| 0.2734 | 27.27 | 7800 | 0.3480 | 0.8720 | 0.8713 |
| 0.2741 | 27.97 | 8000 | 0.3442 | 0.8720 | 0.8713 |
| 0.2712 | 28.67 | 8200 | 0.3485 | 0.8712 | 0.8705 |
| 0.2625 | 29.37 | 8400 | 0.3386 | 0.8763 | 0.8757 |
| 0.2649 | 30.07 | 8600 | 0.3352 | 0.8765 | 0.8759 |
| 0.2664 | 30.77 | 8800 | 0.3495 | 0.8721 | 0.8713 |
| 0.2623 | 31.47 | 9000 | 0.3537 | 0.8694 | 0.8687 |
| 0.2672 | 32.17 | 9200 | 0.3429 | 0.8731 | 0.8724 |
| 0.2611 | 32.87 | 9400 | 0.3470 | 0.8724 | 0.8718 |
| 0.2676 | 33.57 | 9600 | 0.3451 | 0.8720 | 0.8713 |
| 0.26 | 34.27 | 9800 | 0.3423 | 0.8742 | 0.8735 |
| 0.2596 | 34.97 | 10000 | 0.3429 | 0.8731 | 0.8724 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_15M-L32_f | null | [
"region:us"
] | null | 2024-05-03T19:00:45+00:00 | [] | [] | TAGS
#region-us
| GUE\_splice\_reconstructed-seqsight\_4096\_512\_15M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3278
* F1 Score: 0.8773
* Accuracy: 0.8768
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
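The card omits a usage snippet; as a hedged sketch, a PEFT adapter like this one is typically loaded on top of its base checkpoint. The sequence-classification head and the need for `trust_remote_code` are assumptions:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_15M-L32_f"

# Load the base model, then attach the fine-tuned adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
```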
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
5,
100,
5,
52
] | [
"TAGS\n#region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3955
- F1 Score: 0.8171
- Accuracy: 0.818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6058 | 0.79 | 200 | 0.5094 | 0.7695 | 0.771 |
| 0.5094 | 1.58 | 400 | 0.4971 | 0.7632 | 0.764 |
| 0.4956 | 2.37 | 600 | 0.4886 | 0.7586 | 0.76 |
| 0.485 | 3.16 | 800 | 0.4799 | 0.7769 | 0.777 |
| 0.4785 | 3.95 | 1000 | 0.4762 | 0.7733 | 0.774 |
| 0.4771 | 4.74 | 1200 | 0.4786 | 0.7689 | 0.769 |
| 0.4732 | 5.53 | 1400 | 0.4720 | 0.7728 | 0.773 |
| 0.4654 | 6.32 | 1600 | 0.4690 | 0.7730 | 0.773 |
| 0.469 | 7.11 | 1800 | 0.4772 | 0.7660 | 0.766 |
| 0.4692 | 7.91 | 2000 | 0.4683 | 0.7710 | 0.771 |
| 0.4631 | 8.7 | 2200 | 0.4663 | 0.7738 | 0.774 |
| 0.4615 | 9.49 | 2400 | 0.4637 | 0.7783 | 0.779 |
| 0.4635 | 10.28 | 2600 | 0.4755 | 0.7719 | 0.772 |
| 0.4589 | 11.07 | 2800 | 0.4693 | 0.7731 | 0.773 |
| 0.4594 | 11.86 | 3000 | 0.4667 | 0.7801 | 0.78 |
| 0.4585 | 12.65 | 3200 | 0.4688 | 0.7760 | 0.776 |
| 0.4564 | 13.44 | 3400 | 0.4634 | 0.7767 | 0.777 |
| 0.4584 | 14.23 | 3600 | 0.4613 | 0.7789 | 0.779 |
| 0.455 | 15.02 | 3800 | 0.4658 | 0.7781 | 0.778 |
| 0.4552 | 15.81 | 4000 | 0.4622 | 0.7751 | 0.775 |
| 0.4538 | 16.6 | 4200 | 0.4630 | 0.776 | 0.776 |
| 0.454 | 17.39 | 4400 | 0.4675 | 0.7770 | 0.777 |
| 0.4536 | 18.18 | 4600 | 0.4652 | 0.7761 | 0.776 |
| 0.4515 | 18.97 | 4800 | 0.4699 | 0.7749 | 0.775 |
| 0.4563 | 19.76 | 5000 | 0.4599 | 0.7760 | 0.776 |
| 0.4504 | 20.55 | 5200 | 0.4584 | 0.7817 | 0.782 |
| 0.4508 | 21.34 | 5400 | 0.4602 | 0.7771 | 0.777 |
| 0.4466 | 22.13 | 5600 | 0.4644 | 0.7761 | 0.776 |
| 0.4526 | 22.92 | 5800 | 0.4616 | 0.7751 | 0.775 |
| 0.447 | 23.72 | 6000 | 0.4645 | 0.7771 | 0.777 |
| 0.4493 | 24.51 | 6200 | 0.4586 | 0.7759 | 0.776 |
| 0.4531 | 25.3 | 6400 | 0.4613 | 0.7751 | 0.775 |
| 0.446 | 26.09 | 6600 | 0.4614 | 0.7761 | 0.776 |
| 0.4476 | 26.88 | 6800 | 0.4654 | 0.7790 | 0.779 |
| 0.4507 | 27.67 | 7000 | 0.4589 | 0.7770 | 0.777 |
| 0.4472 | 28.46 | 7200 | 0.4572 | 0.7797 | 0.78 |
| 0.4487 | 29.25 | 7400 | 0.4597 | 0.7780 | 0.778 |
| 0.4471 | 30.04 | 7600 | 0.4599 | 0.774 | 0.774 |
| 0.4475 | 30.83 | 7800 | 0.4593 | 0.7780 | 0.778 |
| 0.448 | 31.62 | 8000 | 0.4577 | 0.7739 | 0.774 |
| 0.4473 | 32.41 | 8200 | 0.4599 | 0.7771 | 0.777 |
| 0.4404 | 33.2 | 8400 | 0.4623 | 0.7771 | 0.777 |
| 0.4506 | 33.99 | 8600 | 0.4597 | 0.7750 | 0.775 |
| 0.4495 | 34.78 | 8800 | 0.4573 | 0.7788 | 0.779 |
| 0.4412 | 35.57 | 9000 | 0.4611 | 0.7791 | 0.779 |
| 0.4468 | 36.36 | 9200 | 0.4622 | 0.7780 | 0.778 |
| 0.4493 | 37.15 | 9400 | 0.4617 | 0.7780 | 0.778 |
| 0.4464 | 37.94 | 9600 | 0.4598 | 0.7781 | 0.778 |
| 0.4471 | 38.74 | 9800 | 0.4606 | 0.7781 | 0.778 |
| 0.4465 | 39.53 | 10000 | 0.4601 | 0.7791 | 0.779 |
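A hedged sketch of a `compute_metrics` callback consistent with the F1 Score and Accuracy columns above; the macro averaging mode is an assumption, since the card does not state it:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # Predictions arrive as logits; take the argmax over the label dimension.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }
```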
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:00:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_15M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3955
* F1 Score: 0.8171
* Accuracy: 0.818
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
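One deployment note, as a hedged sketch: if the adapter is a merge-capable type such as LoRA (the card does not say), it can be folded into the base weights for adapter-free inference:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

# Assumptions: sequence-classification head, trust_remote_code, and a
# merge-capable adapter type; none of these are stated on the card.
base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_15M-L1_f")
merged = model.merge_and_unload()  # folds adapter deltas into the base weights
merged.save_pretrained("GUE_tf_0-seqsight-merged")  # placeholder output path
```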
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3820
- F1 Score: 0.8253
- Accuracy: 0.826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.564 | 0.79 | 200 | 0.4938 | 0.7628 | 0.764 |
| 0.4852 | 1.58 | 400 | 0.4781 | 0.7675 | 0.768 |
| 0.4814 | 2.37 | 600 | 0.4737 | 0.7699 | 0.771 |
| 0.4714 | 3.16 | 800 | 0.4691 | 0.7730 | 0.773 |
| 0.4662 | 3.95 | 1000 | 0.4684 | 0.7778 | 0.779 |
| 0.4652 | 4.74 | 1200 | 0.4719 | 0.7720 | 0.772 |
| 0.4602 | 5.53 | 1400 | 0.4762 | 0.7750 | 0.775 |
| 0.4531 | 6.32 | 1600 | 0.4656 | 0.7771 | 0.777 |
| 0.4566 | 7.11 | 1800 | 0.4750 | 0.7699 | 0.77 |
| 0.4543 | 7.91 | 2000 | 0.4682 | 0.7759 | 0.776 |
| 0.4493 | 8.7 | 2200 | 0.4629 | 0.7758 | 0.776 |
| 0.4463 | 9.49 | 2400 | 0.4588 | 0.7816 | 0.782 |
| 0.4487 | 10.28 | 2600 | 0.4744 | 0.7758 | 0.776 |
| 0.443 | 11.07 | 2800 | 0.4710 | 0.7716 | 0.772 |
| 0.4431 | 11.86 | 3000 | 0.4668 | 0.7739 | 0.774 |
| 0.44 | 12.65 | 3200 | 0.4705 | 0.7729 | 0.773 |
| 0.4395 | 13.44 | 3400 | 0.4618 | 0.7819 | 0.782 |
| 0.4389 | 14.23 | 3600 | 0.4595 | 0.7800 | 0.78 |
| 0.4367 | 15.02 | 3800 | 0.4655 | 0.7810 | 0.781 |
| 0.4365 | 15.81 | 4000 | 0.4595 | 0.7760 | 0.776 |
| 0.4339 | 16.6 | 4200 | 0.4603 | 0.7829 | 0.783 |
| 0.4332 | 17.39 | 4400 | 0.4745 | 0.7678 | 0.768 |
| 0.4336 | 18.18 | 4600 | 0.4630 | 0.7731 | 0.773 |
| 0.4308 | 18.97 | 4800 | 0.4697 | 0.7710 | 0.771 |
| 0.4343 | 19.76 | 5000 | 0.4621 | 0.7751 | 0.775 |
| 0.4306 | 20.55 | 5200 | 0.4580 | 0.7829 | 0.783 |
| 0.4279 | 21.34 | 5400 | 0.4651 | 0.7760 | 0.776 |
| 0.4256 | 22.13 | 5600 | 0.4676 | 0.7791 | 0.779 |
| 0.4295 | 22.92 | 5800 | 0.4676 | 0.7681 | 0.768 |
| 0.4238 | 23.72 | 6000 | 0.4683 | 0.7741 | 0.774 |
| 0.426 | 24.51 | 6200 | 0.4588 | 0.7770 | 0.777 |
| 0.4284 | 25.3 | 6400 | 0.4627 | 0.7761 | 0.776 |
| 0.4231 | 26.09 | 6600 | 0.4634 | 0.7810 | 0.781 |
| 0.4252 | 26.88 | 6800 | 0.4691 | 0.7720 | 0.772 |
| 0.4266 | 27.67 | 7000 | 0.4623 | 0.7790 | 0.779 |
| 0.4242 | 28.46 | 7200 | 0.4589 | 0.7808 | 0.781 |
| 0.424 | 29.25 | 7400 | 0.4646 | 0.7751 | 0.775 |
| 0.4238 | 30.04 | 7600 | 0.4670 | 0.7741 | 0.774 |
| 0.4219 | 30.83 | 7800 | 0.4646 | 0.7760 | 0.776 |
| 0.4235 | 31.62 | 8000 | 0.4620 | 0.7829 | 0.783 |
| 0.4231 | 32.41 | 8200 | 0.4656 | 0.7741 | 0.774 |
| 0.4158 | 33.2 | 8400 | 0.4681 | 0.7731 | 0.773 |
| 0.4244 | 33.99 | 8600 | 0.4629 | 0.7780 | 0.778 |
| 0.4233 | 34.78 | 8800 | 0.4610 | 0.7770 | 0.777 |
| 0.4153 | 35.57 | 9000 | 0.4647 | 0.7771 | 0.777 |
| 0.4203 | 36.36 | 9200 | 0.4694 | 0.7730 | 0.773 |
| 0.4236 | 37.15 | 9400 | 0.4657 | 0.7790 | 0.779 |
| 0.4194 | 37.94 | 9600 | 0.4650 | 0.7731 | 0.773 |
| 0.4205 | 38.74 | 9800 | 0.4657 | 0.7731 | 0.773 |
| 0.4202 | 39.53 | 10000 | 0.4650 | 0.7741 | 0.774 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:01:36+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_15M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3820
* F1 Score: 0.8253
* Accuracy: 0.826
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# mlx-community/codegemma-1.1-7b-it-4bit
This model was converted to MLX format from [`google/codegemma-1.1-7b-it`](https://huggingface.co/google/codegemma-1.1-7b-it) using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/google/codegemma-1.1-7b-it) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/codegemma-1.1-7b-it-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
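For this instruction-tuned checkpoint, wrapping the prompt in the tokenizer's chat template usually helps; a hedged sketch (generation defaults vary across mlx-lm versions):

```python
# Continues from the load(...) call above; the example prompt is arbitrary.
messages = [{"role": "user", "content": "Write a Python function to reverse a string."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```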
| {"license": "gemma", "library_name": "transformers", "tags": ["mlx"], "extra_gated_heading": "Access CodeGemma on Hugging Face", "extra_gated_prompt": "To access CodeGemma on Hugging Face, you\u2019re required to review and agree to Google\u2019s usage license. To do this, please ensure you\u2019re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license", "pipeline_tag": "text-generation", "widget": [{"text": "<start_of_turn>user Write a Python function to calculate the nth fibonacci number.<end_of_turn> <start_of_turn>model\n"}], "inference": {"parameters": {"max_new_tokens": 200}}, "license_link": "https://ai.google.dev/gemma/terms"} | mlx-community/codegemma-1.1-7b-it-4bit | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mlx",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T19:02:41+00:00 | [] | [] | TAGS
#transformers #safetensors #gemma #text-generation #mlx #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# mlx-community/codegemma-1.1-7b-it-4bit
This model was converted to MLX format from 'google/codegemma-1.1-7b-it' using mlx-lm version 0.12.0.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/codegemma-1.1-7b-it-4bit\nThis model was converted to MLX format from ['google/codegemma-1.1-7b-it']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #mlx #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# mlx-community/codegemma-1.1-7b-it-4bit\nThis model was converted to MLX format from ['google/codegemma-1.1-7b-it']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
40,
76,
6
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #mlx #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# mlx-community/codegemma-1.1-7b-it-4bit\nThis model was converted to MLX format from ['google/codegemma-1.1-7b-it']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.## Use with mlx"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3822
- F1 Score: 0.8231
- Accuracy: 0.824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5442 | 0.79 | 200 | 0.4868 | 0.7689 | 0.77 |
| 0.4786 | 1.58 | 400 | 0.4719 | 0.7698 | 0.77 |
| 0.474 | 2.37 | 600 | 0.4703 | 0.7744 | 0.775 |
| 0.462 | 3.16 | 800 | 0.4675 | 0.7790 | 0.779 |
| 0.4553 | 3.95 | 1000 | 0.4637 | 0.7710 | 0.772 |
| 0.4525 | 4.74 | 1200 | 0.4660 | 0.7771 | 0.777 |
| 0.4466 | 5.53 | 1400 | 0.4745 | 0.7746 | 0.775 |
| 0.4392 | 6.32 | 1600 | 0.4622 | 0.7780 | 0.778 |
| 0.443 | 7.11 | 1800 | 0.4753 | 0.7676 | 0.768 |
| 0.4393 | 7.91 | 2000 | 0.4669 | 0.7759 | 0.776 |
| 0.4334 | 8.7 | 2200 | 0.4666 | 0.7811 | 0.781 |
| 0.431 | 9.49 | 2400 | 0.4610 | 0.7738 | 0.774 |
| 0.4338 | 10.28 | 2600 | 0.4822 | 0.7694 | 0.77 |
| 0.4258 | 11.07 | 2800 | 0.4797 | 0.7674 | 0.768 |
| 0.4259 | 11.86 | 3000 | 0.4702 | 0.7729 | 0.773 |
| 0.4227 | 12.65 | 3200 | 0.4685 | 0.7850 | 0.785 |
| 0.4211 | 13.44 | 3400 | 0.4679 | 0.7760 | 0.776 |
| 0.4194 | 14.23 | 3600 | 0.4713 | 0.7650 | 0.765 |
| 0.4184 | 15.02 | 3800 | 0.4779 | 0.7768 | 0.777 |
| 0.4151 | 15.81 | 4000 | 0.4566 | 0.7780 | 0.778 |
| 0.4124 | 16.6 | 4200 | 0.4664 | 0.7790 | 0.779 |
| 0.4129 | 17.39 | 4400 | 0.4694 | 0.7828 | 0.783 |
| 0.4114 | 18.18 | 4600 | 0.4705 | 0.7790 | 0.779 |
| 0.4084 | 18.97 | 4800 | 0.4758 | 0.7790 | 0.779 |
| 0.41 | 19.76 | 5000 | 0.4642 | 0.7761 | 0.776 |
| 0.4056 | 20.55 | 5200 | 0.4634 | 0.7860 | 0.786 |
| 0.4043 | 21.34 | 5400 | 0.4719 | 0.7770 | 0.777 |
| 0.4016 | 22.13 | 5600 | 0.4722 | 0.7861 | 0.786 |
| 0.4019 | 22.92 | 5800 | 0.4778 | 0.7780 | 0.778 |
| 0.3985 | 23.72 | 6000 | 0.4769 | 0.7809 | 0.781 |
| 0.3981 | 24.51 | 6200 | 0.4672 | 0.7841 | 0.784 |
| 0.4013 | 25.3 | 6400 | 0.4766 | 0.7770 | 0.777 |
| 0.3957 | 26.09 | 6600 | 0.4738 | 0.7720 | 0.772 |
| 0.3963 | 26.88 | 6800 | 0.4767 | 0.7771 | 0.777 |
| 0.3971 | 27.67 | 7000 | 0.4753 | 0.7811 | 0.781 |
| 0.3938 | 28.46 | 7200 | 0.4698 | 0.7810 | 0.781 |
| 0.3937 | 29.25 | 7400 | 0.4782 | 0.7870 | 0.787 |
| 0.3945 | 30.04 | 7600 | 0.4739 | 0.782 | 0.782 |
| 0.3906 | 30.83 | 7800 | 0.4763 | 0.7761 | 0.776 |
| 0.3911 | 31.62 | 8000 | 0.4681 | 0.7881 | 0.788 |
| 0.3912 | 32.41 | 8200 | 0.4775 | 0.7800 | 0.78 |
| 0.3821 | 33.2 | 8400 | 0.4849 | 0.7760 | 0.776 |
| 0.3923 | 33.99 | 8600 | 0.4778 | 0.7820 | 0.782 |
| 0.3908 | 34.78 | 8800 | 0.4751 | 0.7831 | 0.783 |
| 0.383 | 35.57 | 9000 | 0.4811 | 0.7790 | 0.779 |
| 0.3862 | 36.36 | 9200 | 0.4889 | 0.7779 | 0.778 |
| 0.3904 | 37.15 | 9400 | 0.4837 | 0.7769 | 0.777 |
| 0.3859 | 37.94 | 9600 | 0.4804 | 0.7780 | 0.778 |
| 0.3847 | 38.74 | 9800 | 0.4822 | 0.7810 | 0.781 |
| 0.3863 | 39.53 | 10000 | 0.4812 | 0.7810 | 0.781 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:03:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_0-seqsight\_4096\_512\_15M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3822
* F1 Score: 0.8231
* Accuracy: 0.824
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3572
- F1 Score: 0.8444
- Accuracy: 0.845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6122 | 0.83 | 200 | 0.5520 | 0.7117 | 0.713 |
| 0.5255 | 1.67 | 400 | 0.5377 | 0.7199 | 0.72 |
| 0.5074 | 2.5 | 600 | 0.5309 | 0.7250 | 0.725 |
| 0.4984 | 3.33 | 800 | 0.5262 | 0.7406 | 0.741 |
| 0.4977 | 4.17 | 1000 | 0.5190 | 0.7370 | 0.737 |
| 0.4934 | 5.0 | 1200 | 0.5188 | 0.7430 | 0.743 |
| 0.4875 | 5.83 | 1400 | 0.5129 | 0.7470 | 0.747 |
| 0.4909 | 6.67 | 1600 | 0.5163 | 0.7456 | 0.746 |
| 0.4879 | 7.5 | 1800 | 0.5117 | 0.7440 | 0.744 |
| 0.4839 | 8.33 | 2000 | 0.5103 | 0.7409 | 0.741 |
| 0.483 | 9.17 | 2200 | 0.5086 | 0.7478 | 0.748 |
| 0.4839 | 10.0 | 2400 | 0.5117 | 0.7427 | 0.743 |
| 0.483 | 10.83 | 2600 | 0.5129 | 0.7423 | 0.743 |
| 0.4826 | 11.67 | 2800 | 0.5140 | 0.7481 | 0.749 |
| 0.4732 | 12.5 | 3000 | 0.5130 | 0.7490 | 0.75 |
| 0.487 | 13.33 | 3200 | 0.5049 | 0.7467 | 0.747 |
| 0.4763 | 14.17 | 3400 | 0.5032 | 0.75 | 0.75 |
| 0.4759 | 15.0 | 3600 | 0.5158 | 0.7463 | 0.748 |
| 0.4765 | 15.83 | 3800 | 0.5033 | 0.7539 | 0.754 |
| 0.4795 | 16.67 | 4000 | 0.5095 | 0.7493 | 0.75 |
| 0.4729 | 17.5 | 4200 | 0.5042 | 0.7514 | 0.752 |
| 0.472 | 18.33 | 4400 | 0.5125 | 0.7456 | 0.746 |
| 0.4789 | 19.17 | 4600 | 0.5084 | 0.7490 | 0.75 |
| 0.4773 | 20.0 | 4800 | 0.5030 | 0.7508 | 0.751 |
| 0.4735 | 20.83 | 5000 | 0.5052 | 0.7548 | 0.755 |
| 0.4731 | 21.67 | 5200 | 0.5012 | 0.7499 | 0.75 |
| 0.4711 | 22.5 | 5400 | 0.5037 | 0.7479 | 0.748 |
| 0.4721 | 23.33 | 5600 | 0.5040 | 0.7514 | 0.752 |
| 0.4697 | 24.17 | 5800 | 0.5083 | 0.7523 | 0.753 |
| 0.4732 | 25.0 | 6000 | 0.5022 | 0.7509 | 0.751 |
| 0.4703 | 25.83 | 6200 | 0.5019 | 0.7506 | 0.751 |
| 0.4715 | 26.67 | 6400 | 0.5005 | 0.7480 | 0.748 |
| 0.4741 | 27.5 | 6600 | 0.5019 | 0.7518 | 0.752 |
| 0.4664 | 28.33 | 6800 | 0.5025 | 0.7537 | 0.754 |
| 0.4697 | 29.17 | 7000 | 0.5011 | 0.7547 | 0.755 |
| 0.47 | 30.0 | 7200 | 0.5004 | 0.7537 | 0.754 |
| 0.4698 | 30.83 | 7400 | 0.5049 | 0.7468 | 0.748 |
| 0.4684 | 31.67 | 7600 | 0.4991 | 0.7547 | 0.755 |
| 0.4707 | 32.5 | 7800 | 0.4990 | 0.7537 | 0.754 |
| 0.4647 | 33.33 | 8000 | 0.5003 | 0.7527 | 0.753 |
| 0.4716 | 34.17 | 8200 | 0.4997 | 0.7547 | 0.755 |
| 0.4642 | 35.0 | 8400 | 0.4997 | 0.7509 | 0.751 |
| 0.4682 | 35.83 | 8600 | 0.4997 | 0.7557 | 0.756 |
| 0.4682 | 36.67 | 8800 | 0.4991 | 0.7577 | 0.758 |
| 0.4641 | 37.5 | 9000 | 0.5043 | 0.7472 | 0.748 |
| 0.4681 | 38.33 | 9200 | 0.5000 | 0.7508 | 0.751 |
| 0.4682 | 39.17 | 9400 | 0.5003 | 0.7527 | 0.753 |
| 0.4676 | 40.0 | 9600 | 0.5008 | 0.7545 | 0.755 |
| 0.467 | 40.83 | 9800 | 0.4999 | 0.7508 | 0.751 |
| 0.4663 | 41.67 | 10000 | 0.5000 | 0.7517 | 0.752 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:03:06+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_15M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3572
* F1 Score: 0.8444
* Accuracy: 0.845
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3564
- F1 Score: 0.8497
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5795 | 0.83 | 200 | 0.5363 | 0.7282 | 0.729 |
| 0.5027 | 1.67 | 400 | 0.5199 | 0.7328 | 0.733 |
| 0.4941 | 2.5 | 600 | 0.5148 | 0.7388 | 0.739 |
| 0.4863 | 3.33 | 800 | 0.5092 | 0.7449 | 0.745 |
| 0.4866 | 4.17 | 1000 | 0.5069 | 0.7498 | 0.75 |
| 0.4815 | 5.0 | 1200 | 0.5068 | 0.7549 | 0.755 |
| 0.4743 | 5.83 | 1400 | 0.5016 | 0.7547 | 0.755 |
| 0.4779 | 6.67 | 1600 | 0.5069 | 0.7473 | 0.748 |
| 0.4747 | 7.5 | 1800 | 0.5009 | 0.7590 | 0.759 |
| 0.4702 | 8.33 | 2000 | 0.4978 | 0.7560 | 0.756 |
| 0.4687 | 9.17 | 2200 | 0.4952 | 0.7559 | 0.756 |
| 0.4681 | 10.0 | 2400 | 0.5077 | 0.7407 | 0.742 |
| 0.4676 | 10.83 | 2600 | 0.5064 | 0.7450 | 0.747 |
| 0.4655 | 11.67 | 2800 | 0.5004 | 0.7529 | 0.754 |
| 0.4556 | 12.5 | 3000 | 0.5048 | 0.7503 | 0.752 |
| 0.4698 | 13.33 | 3200 | 0.4985 | 0.7533 | 0.754 |
| 0.4578 | 14.17 | 3400 | 0.4922 | 0.7499 | 0.75 |
| 0.4565 | 15.0 | 3600 | 0.5020 | 0.7496 | 0.751 |
| 0.455 | 15.83 | 3800 | 0.4908 | 0.7665 | 0.767 |
| 0.4587 | 16.67 | 4000 | 0.4998 | 0.7512 | 0.752 |
| 0.4521 | 17.5 | 4200 | 0.4939 | 0.7563 | 0.757 |
| 0.4506 | 18.33 | 4400 | 0.4958 | 0.7626 | 0.763 |
| 0.4572 | 19.17 | 4600 | 0.4979 | 0.7538 | 0.755 |
| 0.4545 | 20.0 | 4800 | 0.4925 | 0.7598 | 0.76 |
| 0.4517 | 20.83 | 5000 | 0.4961 | 0.7578 | 0.758 |
| 0.449 | 21.67 | 5200 | 0.4891 | 0.7588 | 0.759 |
| 0.4475 | 22.5 | 5400 | 0.4911 | 0.7658 | 0.766 |
| 0.4504 | 23.33 | 5600 | 0.4953 | 0.7520 | 0.753 |
| 0.4481 | 24.17 | 5800 | 0.4969 | 0.7571 | 0.758 |
| 0.4497 | 25.0 | 6000 | 0.4905 | 0.7618 | 0.762 |
| 0.4456 | 25.83 | 6200 | 0.4922 | 0.7574 | 0.758 |
| 0.447 | 26.67 | 6400 | 0.4876 | 0.7590 | 0.759 |
| 0.4495 | 27.5 | 6600 | 0.4930 | 0.7579 | 0.758 |
| 0.4409 | 28.33 | 6800 | 0.4908 | 0.7520 | 0.752 |
| 0.4444 | 29.17 | 7000 | 0.4898 | 0.7568 | 0.757 |
| 0.4448 | 30.0 | 7200 | 0.4911 | 0.7507 | 0.751 |
| 0.4457 | 30.83 | 7400 | 0.4959 | 0.7586 | 0.76 |
| 0.4421 | 31.67 | 7600 | 0.4910 | 0.7578 | 0.758 |
| 0.4467 | 32.5 | 7800 | 0.4899 | 0.7528 | 0.753 |
| 0.4392 | 33.33 | 8000 | 0.4893 | 0.7559 | 0.756 |
| 0.4449 | 34.17 | 8200 | 0.4920 | 0.7607 | 0.761 |
| 0.4386 | 35.0 | 8400 | 0.4902 | 0.7549 | 0.755 |
| 0.4414 | 35.83 | 8600 | 0.4890 | 0.7588 | 0.759 |
| 0.4424 | 36.67 | 8800 | 0.4900 | 0.7587 | 0.759 |
| 0.4385 | 37.5 | 9000 | 0.4957 | 0.7608 | 0.762 |
| 0.4414 | 38.33 | 9200 | 0.4900 | 0.7528 | 0.753 |
| 0.4421 | 39.17 | 9400 | 0.4894 | 0.7587 | 0.759 |
| 0.4408 | 40.0 | 9600 | 0.4911 | 0.7585 | 0.759 |
| 0.4398 | 40.83 | 9800 | 0.4891 | 0.7569 | 0.757 |
| 0.4397 | 41.67 | 10000 | 0.4895 | 0.7578 | 0.758 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:03:42+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_15M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3564
* F1 Score: 0.8497
* Accuracy: 0.85
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
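
While the card leaves this section as a placeholder, the repository name suggests a wav2vec2 model fine-tuned for Uzbek automatic speech recognition. The snippet below is a minimal, unverified sketch of loading it through the 🤗 `pipeline` API; the repo id comes from this card's metadata, the audio path is a placeholder, and the ASR task assignment is an assumption drawn from the model name.

```python
# Minimal sketch (assumption: this checkpoint behaves as a standard
# wav2vec2 CTC model for automatic speech recognition).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ShakhzoDavronov/wav2vec2-base-asr-uz",  # repo id from the card metadata
)

# Transcribe a local audio file; "sample.wav" is a placeholder path.
print(asr("sample.wav")["text"])
```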
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ShakhzoDavronov/wav2vec2-base-asr-uz | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T19:03:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
22,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3672
- F1 Score: 0.8397
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a sketch of the corresponding optimizer and schedule setup follows the list:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
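
The card does not include the training script itself. Purely as an illustration, the hyperparameters above might be wired up along these lines; the stand-in model, the zero warmup steps, and all variable names are assumptions, not the authors' code.

```python
# Illustrative sketch only: maps the listed hyperparameters onto
# torch.optim.Adam plus a linear schedule. The real model is a PEFT-wrapped
# seqsight_4096_512_15M classifier, stood in for here by a dummy module.
import torch
from transformers import get_scheduler

model = torch.nn.Linear(16, 2)  # dummy stand-in for the PEFT model

optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,          # no warmup is listed on the card
    num_training_steps=10_000,
)

for step in range(10_000):
    # the forward/backward pass on a 128-example batch would run here
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```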
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5598 | 0.83 | 200 | 0.5336 | 0.7225 | 0.725 |
| 0.497 | 1.67 | 400 | 0.5164 | 0.7355 | 0.736 |
| 0.488 | 2.5 | 600 | 0.5110 | 0.7427 | 0.743 |
| 0.4796 | 3.33 | 800 | 0.5018 | 0.7480 | 0.748 |
| 0.4781 | 4.17 | 1000 | 0.4984 | 0.7543 | 0.755 |
| 0.4707 | 5.0 | 1200 | 0.4983 | 0.7524 | 0.753 |
| 0.4633 | 5.83 | 1400 | 0.4924 | 0.7587 | 0.759 |
| 0.4651 | 6.67 | 1600 | 0.4974 | 0.7575 | 0.758 |
| 0.461 | 7.5 | 1800 | 0.4896 | 0.7530 | 0.753 |
| 0.4546 | 8.33 | 2000 | 0.4871 | 0.7520 | 0.752 |
| 0.4557 | 9.17 | 2200 | 0.4878 | 0.7580 | 0.758 |
| 0.4528 | 10.0 | 2400 | 0.5009 | 0.7536 | 0.755 |
| 0.4518 | 10.83 | 2600 | 0.4983 | 0.7643 | 0.766 |
| 0.448 | 11.67 | 2800 | 0.4872 | 0.7632 | 0.764 |
| 0.4383 | 12.5 | 3000 | 0.4980 | 0.7536 | 0.756 |
| 0.45 | 13.33 | 3200 | 0.4950 | 0.7545 | 0.755 |
| 0.4399 | 14.17 | 3400 | 0.4871 | 0.7568 | 0.757 |
| 0.4376 | 15.0 | 3600 | 0.4921 | 0.7626 | 0.764 |
| 0.4335 | 15.83 | 3800 | 0.4854 | 0.7635 | 0.764 |
| 0.4385 | 16.67 | 4000 | 0.4883 | 0.7687 | 0.769 |
| 0.4312 | 17.5 | 4200 | 0.4865 | 0.7705 | 0.771 |
| 0.4283 | 18.33 | 4400 | 0.4901 | 0.7687 | 0.769 |
| 0.4335 | 19.17 | 4600 | 0.4903 | 0.7618 | 0.763 |
| 0.4289 | 20.0 | 4800 | 0.4864 | 0.7658 | 0.766 |
| 0.4278 | 20.83 | 5000 | 0.4913 | 0.7576 | 0.758 |
| 0.4233 | 21.67 | 5200 | 0.4827 | 0.7589 | 0.759 |
| 0.4206 | 22.5 | 5400 | 0.4899 | 0.7586 | 0.759 |
| 0.423 | 23.33 | 5600 | 0.4898 | 0.7641 | 0.765 |
| 0.4215 | 24.17 | 5800 | 0.4892 | 0.7665 | 0.767 |
| 0.4221 | 25.0 | 6000 | 0.4869 | 0.7660 | 0.766 |
| 0.4154 | 25.83 | 6200 | 0.4917 | 0.7571 | 0.758 |
| 0.4184 | 26.67 | 6400 | 0.4817 | 0.7679 | 0.768 |
| 0.4207 | 27.5 | 6600 | 0.4918 | 0.7620 | 0.762 |
| 0.4123 | 28.33 | 6800 | 0.4898 | 0.7630 | 0.763 |
| 0.4144 | 29.17 | 7000 | 0.4853 | 0.7689 | 0.769 |
| 0.4137 | 30.0 | 7200 | 0.4866 | 0.7579 | 0.758 |
| 0.4125 | 30.83 | 7400 | 0.4911 | 0.7602 | 0.761 |
| 0.41 | 31.67 | 7600 | 0.4887 | 0.7699 | 0.77 |
| 0.4125 | 32.5 | 7800 | 0.4877 | 0.7599 | 0.76 |
| 0.4056 | 33.33 | 8000 | 0.4875 | 0.7638 | 0.764 |
| 0.4108 | 34.17 | 8200 | 0.4858 | 0.7658 | 0.766 |
| 0.4069 | 35.0 | 8400 | 0.4888 | 0.7670 | 0.767 |
| 0.4083 | 35.83 | 8600 | 0.4846 | 0.7739 | 0.774 |
| 0.4073 | 36.67 | 8800 | 0.4859 | 0.7697 | 0.77 |
| 0.4035 | 37.5 | 9000 | 0.4917 | 0.7625 | 0.763 |
| 0.4053 | 38.33 | 9200 | 0.4890 | 0.7679 | 0.768 |
| 0.4067 | 39.17 | 9400 | 0.4876 | 0.7699 | 0.77 |
| 0.4053 | 40.0 | 9600 | 0.4891 | 0.7657 | 0.766 |
| 0.4026 | 40.83 | 9800 | 0.4878 | 0.7679 | 0.768 |
| 0.4014 | 41.67 | 10000 | 0.4883 | 0.7678 | 0.768 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:04:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_1-seqsight\_4096\_512\_15M-L32\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3672
* F1 Score: 0.8397
* Accuracy: 0.84
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
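
The section above is still a placeholder. Judging only from the repository name, this repo appears to hold PEFT adapters for `meta-llama/Llama-2-7b-hf`, so the following loading sketch is an assumption rather than a documented recipe (note the base model is a gated repo on the Hub).

```python
# Sketch under assumptions: the repo holds LoRA/PEFT adapters for Llama-2-7b-hf.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model (gated repo)
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_8_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```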
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.01_8_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T19:04:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_synDB_base
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; an illustrative `Seq2SeqTrainingArguments` mapping follows the list:
- learning_rate: 6e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24
- num_epochs: 4
- mixed_precision_training: Native AMP
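
The training script is not part of this card; the sketch below only shows how the hyperparameters above could be expressed as `Seq2SeqTrainingArguments`. The output directory and the `fp16` flag are assumptions (the card says only "Native AMP").

```python
# Assumed mapping of the listed hyperparameters; not the authors' script.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="donut_synDB_base",       # hypothetical output directory
    learning_rate=6e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,       # 5 x 2 = total train batch size 10
    lr_scheduler_type="linear",
    warmup_steps=24,
    num_train_epochs=4,
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision
)
```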
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9617 | 0.45 | 24 | 1.3561 |
| 1.1972 | 0.68 | 36 | 0.6411 |
| 0.7063 | 0.91 | 48 | 0.4467 |
| 0.5082 | 1.13 | 60 | 0.3735 |
| 0.3655 | 1.36 | 72 | 0.3117 |
| 0.2861 | 1.58 | 84 | 0.2874 |
| 0.274 | 1.81 | 96 | 0.2492 |
| 0.2308 | 2.04 | 108 | 0.2510 |
| 0.1745 | 2.26 | 120 | 0.2350 |
| 0.1594 | 2.49 | 132 | 0.2334 |
| 0.1471 | 2.72 | 144 | 0.2414 |
| 0.1777 | 2.94 | 156 | 0.2159 |
| 0.1401 | 3.17 | 168 | 0.2359 |
| 0.1008 | 3.4 | 180 | 0.2389 |
| 0.1038 | 3.62 | 192 | 0.2346 |
| 0.1251 | 3.85 | 204 | 0.2303 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut_synDB_base", "results": []}]} | Donut01/donut_synDB_base | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T19:05:18+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #endpoints_compatible #region-us
| donut\_synDB\_base
==================
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2303
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-05
* train\_batch\_size: 5
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 10
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 24
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 24\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 24\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
28,
151,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 24\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set (a hedged loading sketch follows the list):
- Loss: 0.3786
- F1 Score: 0.8268
- Accuracy: 0.828
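
The numbers above come from a PEFT adapter on top of the seqsight backbone. Loading it for inference might look roughly like this sketch; the auto-class choice, the `trust_remote_code` flag, and the toy DNA input are all assumptions, since the card documents no inference setup.

```python
# Rough, unverified sketch of loading the adapter for sequence classification.
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_15M-L1_f"
backbone_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"

model = AutoPeftModelForSequenceClassification.from_pretrained(
    adapter_id, trust_remote_code=True  # backbone may ship custom model code
)
tokenizer = AutoTokenizer.from_pretrained(backbone_id, trust_remote_code=True)

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits.softmax(-1))
```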
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6103 | 1.34 | 200 | 0.5528 | 0.7220 | 0.722 |
| 0.5121 | 2.68 | 400 | 0.5260 | 0.7171 | 0.719 |
| 0.4937 | 4.03 | 600 | 0.5191 | 0.7198 | 0.722 |
| 0.4848 | 5.37 | 800 | 0.5120 | 0.7330 | 0.733 |
| 0.4759 | 6.71 | 1000 | 0.5128 | 0.7358 | 0.737 |
| 0.4736 | 8.05 | 1200 | 0.5054 | 0.7410 | 0.741 |
| 0.4722 | 9.4 | 1400 | 0.5011 | 0.7393 | 0.74 |
| 0.468 | 10.74 | 1600 | 0.5104 | 0.7402 | 0.743 |
| 0.4641 | 12.08 | 1800 | 0.5051 | 0.7509 | 0.752 |
| 0.4627 | 13.42 | 2000 | 0.4937 | 0.7517 | 0.752 |
| 0.4553 | 14.77 | 2200 | 0.4941 | 0.7610 | 0.761 |
| 0.4575 | 16.11 | 2400 | 0.4892 | 0.7505 | 0.751 |
| 0.4516 | 17.45 | 2600 | 0.4884 | 0.7609 | 0.761 |
| 0.45 | 18.79 | 2800 | 0.4898 | 0.7610 | 0.761 |
| 0.4505 | 20.13 | 3000 | 0.4834 | 0.7610 | 0.761 |
| 0.4453 | 21.48 | 3200 | 0.4831 | 0.7608 | 0.761 |
| 0.4413 | 22.82 | 3400 | 0.4865 | 0.7655 | 0.766 |
| 0.4411 | 24.16 | 3600 | 0.4814 | 0.7609 | 0.761 |
| 0.442 | 25.5 | 3800 | 0.4816 | 0.7613 | 0.762 |
| 0.4367 | 26.85 | 4000 | 0.4785 | 0.7627 | 0.763 |
| 0.438 | 28.19 | 4200 | 0.4775 | 0.7619 | 0.762 |
| 0.4334 | 29.53 | 4400 | 0.4820 | 0.7619 | 0.762 |
| 0.4363 | 30.87 | 4600 | 0.4756 | 0.7588 | 0.759 |
| 0.4359 | 32.21 | 4800 | 0.4749 | 0.7685 | 0.769 |
| 0.4317 | 33.56 | 5000 | 0.4782 | 0.7655 | 0.766 |
| 0.4349 | 34.9 | 5200 | 0.4752 | 0.7629 | 0.763 |
| 0.4349 | 36.24 | 5400 | 0.4750 | 0.7648 | 0.765 |
| 0.4273 | 37.58 | 5600 | 0.4740 | 0.7697 | 0.77 |
| 0.4297 | 38.93 | 5800 | 0.4709 | 0.7610 | 0.761 |
| 0.4299 | 40.27 | 6000 | 0.4738 | 0.7678 | 0.768 |
| 0.4262 | 41.61 | 6200 | 0.4744 | 0.7746 | 0.775 |
| 0.4297 | 42.95 | 6400 | 0.4697 | 0.7679 | 0.768 |
| 0.4307 | 44.3 | 6600 | 0.4700 | 0.7737 | 0.774 |
| 0.4229 | 45.64 | 6800 | 0.4703 | 0.7649 | 0.765 |
| 0.4306 | 46.98 | 7000 | 0.4694 | 0.7659 | 0.766 |
| 0.4281 | 48.32 | 7200 | 0.4675 | 0.7698 | 0.77 |
| 0.4223 | 49.66 | 7400 | 0.4683 | 0.7640 | 0.764 |
| 0.4251 | 51.01 | 7600 | 0.4663 | 0.7728 | 0.773 |
| 0.4226 | 52.35 | 7800 | 0.4696 | 0.7737 | 0.774 |
| 0.4212 | 53.69 | 8000 | 0.4690 | 0.7669 | 0.767 |
| 0.425 | 55.03 | 8200 | 0.4684 | 0.7650 | 0.765 |
| 0.4201 | 56.38 | 8400 | 0.4693 | 0.7688 | 0.769 |
| 0.4243 | 57.72 | 8600 | 0.4693 | 0.7775 | 0.778 |
| 0.4226 | 59.06 | 8800 | 0.4673 | 0.7699 | 0.77 |
| 0.4219 | 60.4 | 9000 | 0.4665 | 0.7688 | 0.769 |
| 0.4213 | 61.74 | 9200 | 0.4676 | 0.7727 | 0.773 |
| 0.4234 | 63.09 | 9400 | 0.4664 | 0.7708 | 0.771 |
| 0.4237 | 64.43 | 9600 | 0.4666 | 0.7727 | 0.773 |
| 0.4181 | 65.77 | 9800 | 0.4672 | 0.7727 | 0.773 |
| 0.4216 | 67.11 | 10000 | 0.4668 | 0.7708 | 0.771 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:07:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_4-seqsight\_4096\_512\_15M-L1\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3786
* F1 Score: 0.8268
* Accuracy: 0.828
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3453
- F1 Score: 0.8437
- Accuracy: 0.844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5764 | 1.34 | 200 | 0.5276 | 0.7290 | 0.729 |
| 0.4864 | 2.68 | 400 | 0.5099 | 0.7339 | 0.735 |
| 0.4755 | 4.03 | 600 | 0.5020 | 0.7475 | 0.749 |
| 0.4666 | 5.37 | 800 | 0.5031 | 0.7510 | 0.751 |
| 0.4555 | 6.71 | 1000 | 0.4985 | 0.7576 | 0.759 |
| 0.4472 | 8.05 | 1200 | 0.4870 | 0.7650 | 0.765 |
| 0.4434 | 9.4 | 1400 | 0.4770 | 0.7660 | 0.766 |
| 0.4363 | 10.74 | 1600 | 0.4825 | 0.7683 | 0.769 |
| 0.4306 | 12.08 | 1800 | 0.4921 | 0.7653 | 0.767 |
| 0.4298 | 13.42 | 2000 | 0.4703 | 0.7706 | 0.771 |
| 0.4204 | 14.77 | 2200 | 0.4711 | 0.7780 | 0.778 |
| 0.4214 | 16.11 | 2400 | 0.4657 | 0.7794 | 0.78 |
| 0.4143 | 17.45 | 2600 | 0.4682 | 0.7690 | 0.769 |
| 0.4146 | 18.79 | 2800 | 0.4683 | 0.7789 | 0.779 |
| 0.4115 | 20.13 | 3000 | 0.4607 | 0.7830 | 0.783 |
| 0.4067 | 21.48 | 3200 | 0.4599 | 0.7754 | 0.776 |
| 0.4058 | 22.82 | 3400 | 0.4602 | 0.7845 | 0.785 |
| 0.4037 | 24.16 | 3600 | 0.4567 | 0.7869 | 0.787 |
| 0.4037 | 25.5 | 3800 | 0.4618 | 0.7751 | 0.777 |
| 0.3983 | 26.85 | 4000 | 0.4500 | 0.7869 | 0.787 |
| 0.3997 | 28.19 | 4200 | 0.4478 | 0.7908 | 0.791 |
| 0.3934 | 29.53 | 4400 | 0.4553 | 0.7870 | 0.787 |
| 0.3952 | 30.87 | 4600 | 0.4489 | 0.7950 | 0.795 |
| 0.3942 | 32.21 | 4800 | 0.4456 | 0.7858 | 0.786 |
| 0.3902 | 33.56 | 5000 | 0.4470 | 0.7899 | 0.79 |
| 0.3929 | 34.9 | 5200 | 0.4444 | 0.7869 | 0.787 |
| 0.3912 | 36.24 | 5400 | 0.4407 | 0.7938 | 0.794 |
| 0.385 | 37.58 | 5600 | 0.4421 | 0.7956 | 0.796 |
| 0.3848 | 38.93 | 5800 | 0.4436 | 0.7910 | 0.791 |
| 0.385 | 40.27 | 6000 | 0.4459 | 0.7946 | 0.795 |
| 0.3824 | 41.61 | 6200 | 0.4448 | 0.7919 | 0.792 |
| 0.3834 | 42.95 | 6400 | 0.4399 | 0.7979 | 0.798 |
| 0.3866 | 44.3 | 6600 | 0.4380 | 0.7955 | 0.796 |
| 0.3756 | 45.64 | 6800 | 0.4389 | 0.796 | 0.796 |
| 0.383 | 46.98 | 7000 | 0.4373 | 0.7950 | 0.795 |
| 0.3819 | 48.32 | 7200 | 0.4338 | 0.7978 | 0.798 |
| 0.3741 | 49.66 | 7400 | 0.4343 | 0.7980 | 0.798 |
| 0.3766 | 51.01 | 7600 | 0.4344 | 0.7979 | 0.798 |
| 0.3767 | 52.35 | 7800 | 0.4364 | 0.7966 | 0.797 |
| 0.371 | 53.69 | 8000 | 0.4383 | 0.8040 | 0.804 |
| 0.3771 | 55.03 | 8200 | 0.4385 | 0.7970 | 0.797 |
| 0.3722 | 56.38 | 8400 | 0.4358 | 0.7999 | 0.8 |
| 0.3745 | 57.72 | 8600 | 0.4348 | 0.8036 | 0.804 |
| 0.3711 | 59.06 | 8800 | 0.4338 | 0.7979 | 0.798 |
| 0.3688 | 60.4 | 9000 | 0.4342 | 0.8019 | 0.802 |
| 0.3685 | 61.74 | 9200 | 0.4342 | 0.8037 | 0.804 |
| 0.3723 | 63.09 | 9400 | 0.4327 | 0.8027 | 0.803 |
| 0.3719 | 64.43 | 9600 | 0.4326 | 0.8037 | 0.804 |
| 0.3661 | 65.77 | 9800 | 0.4331 | 0.8018 | 0.802 |
| 0.3721 | 67.11 | 10000 | 0.4325 | 0.8018 | 0.802 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T19:08:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_tf\_4-seqsight\_4096\_512\_15M-L8\_f
=========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3453
* F1 Score: 0.8437
* Accuracy: 0.844
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
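
The card leaves this section empty. The repository name suggests LoRA adapters for `google/vit-base-patch16-224-in21k` fine-tuned on chest X-rays, so the sketch below rests on assumptions: the base checkpoint, the image path, and the classifier head shape (which would presumably be restored from the adapter's saved modules) are not documented here.

```python
# Sketch under assumptions: the repo stores LoRA adapters for the in21k ViT base.
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification
from peft import PeftModel

base_id = "google/vit-base-patch16-224-in21k"  # assumed base checkpoint
adapter_id = "Nitish/vit-base-patch16-224-in21k-lora-fine-tuned-chest-xray"

processor = AutoImageProcessor.from_pretrained(base_id)
model = AutoModelForImageClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # may restore the task head

image = Image.open("chest_xray.png")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
print(model(**inputs).logits.argmax(-1).item())
```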
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Nitish/vit-base-patch16-224-in21k-lora-fine-tuned-chest-xray | null | [
"region:us"
] | null | 2024-05-03T19:08:59+00:00 | [] | [] | TAGS
#region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
5,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |