| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | — |
| tags | sequencelengths | 1–1.84k |
| sha | null | — |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
| tokens_length | sequencelengths | 1–723 |
| input_texts | sequencelengths | 1–1 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
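Since this auto-generated card leaves the snippet blank, a minimal sketch using the standard 🤗 Transformers API follows. Only the repository id comes from this record's metadata; the prompt and generation settings are illustrative assumptions, and the AWQ checkpoint additionally requires `autoawq` to be installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this record's metadata; all other settings are illustrative.
model_id = "julep-ai/dolphin-2.9-llama3-70b-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short continuation from a simple prompt.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```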
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | julep-ai/dolphin-2.9-llama3-70b-awq | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-03T14:01:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
48,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
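As with the previous record, the card leaves this section blank; a minimal sketch, assuming the standard 🤗 Transformers API, is shown below. The repository id comes from this record's metadata and, since the record is tagged `conversational`, the chat-template path is used; everything else is an illustrative assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this record's metadata; everything else is illustrative.
model_id = "cilantro9246/k069igm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The record is tagged "conversational", so the chat template path is used here.
messages = [{"role": "user", "content": "Write a haiku about spring."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```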
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/k069igm | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:02:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Mistral-7b-Instruct-v0.1-int8-ov
* Model creator: [Mistral AI](https://huggingface.co/mistralai)
* Original model: [Mistral-7b-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
## Description
This is the [Mistral-7b-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT8_ASYM**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
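For readers unfamiliar with the API, here is a minimal sketch of what that compression step looks like; the input/output paths and the `ov_model` variable are illustrative assumptions, while `nncf.compress_weights` and the `INT8_ASYM` mode are as stated above.
```python
import nncf
import openvino as ov

# Read the full-precision OpenVINO IR model (path is illustrative).
ov_model = ov.Core().read_model("mistral-7b-instruct-v0.1.xml")

# Compress the weights to INT8 using asymmetric quantization.
compressed_model = nncf.compress_weights(ov_model, mode=nncf.CompressWeightsMode.INT8_ASYM)

# Save the compressed model back to IR format.
ov.save_model(compressed_model, "mistral-7b-instruct-v0.1-int8.xml")
```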
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher
## Running Model Inference
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```sh
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

# Model repository id on the Hugging Face Hub.
model_id = "OpenVINO/mistral-7b-instrcut-v0.1-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# Apply the model's chat template, then generate a continuation.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Limitations
Check the original model card for [limitations](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1#limitations).
## Legal information
The original model is distributed under the [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). | {"language": ["en"], "license": "apache-2.0"} | OpenVINO/mistral-7b-instrcut-v0.1-int4-ov | null | [
"transformers",
"openvino",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:02:05+00:00 | [] | [
"en"
] | TAGS
#transformers #openvino #mistral #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mistral-7b-Instruct-v0.1-int8-ov
* Model creator: Mistral AI
* Original model: Mistral-7b-Instruct-v0.1
## Description
This is the Mistral-7b-Instruct-v0.1 model converted to the OpenVINO™ IR (Intermediate Representation) format, with weights compressed to INT8 by NNCF.
## Quantization Parameters
Weight compression was performed using 'nncf.compress_weights' with the following parameters:
* mode: INT8_ASYM
For more information on quantization, check the OpenVINO model optimization guide.
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher
## Running Model Inference
1. Install packages required for using Optimum Intel integration with the OpenVINO backend:
2. Run model inference:
For more examples and possible optimizations, refer to the OpenVINO Large Language Model Inference Guide.
## Limitations
Check the original model card for limitations.
## Legal information
The original model is distributed under the Apache 2.0 license. More details can be found in the original model card. | [
"# Mistral-7b-Instruct-v0.1-int8-ov\n\n * Model creator: Mistral AI\n * Original model: Mistral-7b-Instruct-v0.1",
"## Description\n\nThis is Mistral-7b-Instruct-v0.1 model converted to the OpenVINOâ„¢ IR (Intermediate Representation) format with weights compressed to INT8 by NNCF.",
"## Quantization Parameters\n\nWeight compression was performed using 'nncf.compress_weights' with the following parameters:\n\n* mode: INT8_ASYM\n\nFor more information on quantization, check the OpenVINO model optimization guide",
"## Compatibility\n\nThe provided OpenVINOâ„¢ IR model is compatible with:\n\n* OpenVINO version 2024.1.0 and higher\n* Optimum Intel 1.16.0 and higher",
"## Running Model Inference\n\n1. Install packages required for using Optimum Intel integration with the OpenVINO backend:\n\n\n\n2. Run model inference:\n\n\n\nFor more examples and possible optimizations, refer to the OpenVINO Large Language Model Inference Guide.",
"## Limitations\n\nCheck the original model card for limitations.",
"## Legal information\n\nThe original model is distributed under Apache 2.0 license. More details can be found in original model card."
] | [
"TAGS\n#transformers #openvino #mistral #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mistral-7b-Instruct-v0.1-int8-ov\n\n * Model creator: Mistral AI\n * Original model: Mistral-7b-Instruct-v0.1",
"## Description\n\nThis is Mistral-7b-Instruct-v0.1 model converted to the OpenVINOâ„¢ IR (Intermediate Representation) format with weights compressed to INT8 by NNCF.",
"## Quantization Parameters\n\nWeight compression was performed using 'nncf.compress_weights' with the following parameters:\n\n* mode: INT8_ASYM\n\nFor more information on quantization, check the OpenVINO model optimization guide",
"## Compatibility\n\nThe provided OpenVINOâ„¢ IR model is compatible with:\n\n* OpenVINO version 2024.1.0 and higher\n* Optimum Intel 1.16.0 and higher",
"## Running Model Inference\n\n1. Install packages required for using Optimum Intel integration with the OpenVINO backend:\n\n\n\n2. Run model inference:\n\n\n\nFor more examples and possible optimizations, refer to the OpenVINO Large Language Model Inference Guide.",
"## Limitations\n\nCheck the original model card for limitations.",
"## Legal information\n\nThe original model is distributed under Apache 2.0 license. More details can be found in original model card."
] | [
46,
44,
42,
49,
37,
48,
11,
26
] | [
"TAGS\n#transformers #openvino #mistral #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mistral-7b-Instruct-v0.1-int8-ov\n\n * Model creator: Mistral AI\n * Original model: Mistral-7b-Instruct-v0.1## Description\n\nThis is Mistral-7b-Instruct-v0.1 model converted to the OpenVINOâ„¢ IR (Intermediate Representation) format with weights compressed to INT8 by NNCF.## Quantization Parameters\n\nWeight compression was performed using 'nncf.compress_weights' with the following parameters:\n\n* mode: INT8_ASYM\n\nFor more information on quantization, check the OpenVINO model optimization guide## Compatibility\n\nThe provided OpenVINOâ„¢ IR model is compatible with:\n\n* OpenVINO version 2024.1.0 and higher\n* Optimum Intel 1.16.0 and higher## Running Model Inference\n\n1. Install packages required for using Optimum Intel integration with the OpenVINO backend:\n\n\n\n2. Run model inference:\n\n\n\nFor more examples and possible optimizations, refer to the OpenVINO Large Language Model Inference Guide.## Limitations\n\nCheck the original model card for limitations.## Legal information\n\nThe original model is distributed under Apache 2.0 license. More details can be found in original model card."
] |
null | null | What is Peptit complex Cream?
Peptit complex Price is a state-of-the-art anti-aging serum designed to revitalize and renew the skin's appearance. Its powerful formula harnesses the power of peptides, amino acid compounds known for their ability to stimulate collagen production, increase skin elasticity, and reduce the signs of aging. With regular use, Peptit complex Reviews aims to restore the skin's youthful glow and improve overall skin health.
Official website: <a href="https://www.nutritionsee.com/peptompurke">www.Peptitcomplex.com</a>
<p><a href="https://www.nutritionsee.com/peptompurke"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/05/Peptit-complex-turkey-1.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/peptompurke">Buy now!! Click the link below for more information and take advantage of a 50% discount right away... Hurry</a>
Official website: <a href="https://www.nutritionsee.com/peptompurke">www.Peptitcomplex.com</a> | {"license": "apache-2.0"} | Peptitcomplex/Peptitcomplex | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T14:03:07+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| What is Peptit complex Cream?
Peptit complex Price is a state-of-the-art anti-aging serum designed to revitalize and renew the skin's appearance. Its powerful formula harnesses the power of peptides, amino acid compounds known for their ability to stimulate collagen production, increase skin elasticity, and reduce the signs of aging. With regular use, Peptit complex Reviews aims to restore the skin's youthful glow and improve overall skin health.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL>Buy now!! Click the link below for more information and take advantage of a 50% discount right away... Hurry</a>
Official website:<a href="URL | [
"TAGS\n#license-apache-2.0 #region-us \n"
] | [
13
] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
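This auto-generated card again omits the snippet; a minimal sketch using the 🤗 Transformers `pipeline` API follows. Only the repository id comes from this record's metadata; the prompt and settings are illustrative assumptions.
```python
from transformers import pipeline

# Repository id taken from this record's metadata; the prompt is illustrative.
generator = pipeline("text-generation", model="golf2248/8aj1lky", device_map="auto")

result = generator("A model card is", max_new_tokens=40)
print(result[0]["generated_text"])
```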
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/8aj1lky | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:04:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | transformers |
# HistoroBERTa-SuicideIncidentClassifier
A binary classifier based on the RoBERTa-base architecture, fine-tuned on [historical British newspaper articles](https://huggingface.co/datasets/npedrazzini/hist_suicide_incident) to discern whether news reports discuss (confirmed or speculated) suicide cases, investigations, or court cases related to suicides. It attempts to differentiate between texts where _suicide(s)_ or _suicidal_ is used in the context of actual incidents and those where these terms appear figuratively or in broader, non-specific discussions (e.g., mention of the number of suicides in the context of vital statistics; philosophical discussions around the morality of suicide at an abstract level; etc.).
# Overview
- **Model Name:** HistoroBERTa-SuicideIncidentClassifier
- **Task**: Binary Classification
- **Labels**: ['Incident', 'Not Incident']
- **Base Model:** [RoBERTa (A Robustly Optimized BERT Pretraining Approach) base model](https://huggingface.co/FacebookAI/roberta-base)
- **Language:** 19th-century English (1780-1920)
- **Developed by:** [Nilo Pedrazzini](https://huggingface.co/npedrazzini), [Daniel CS Wilson](https://huggingface.co/dcsw2)
# Input Format
A `str`-type input.
# Output Format
The predicted label (`Incident` or `Not Incident`), with the confidence score for each label.
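A quick way to obtain both the label and the per-label scores, assuming the standard 🤗 Transformers `pipeline` API, is sketched below; the exact label strings follow the model's config.
```python
from transformers import pipeline

# top_k=None returns the confidence score for every label, not just the top one.
classifier = pipeline(
    "text-classification",
    model="npedrazzini/HistoroBERTa-SuicideIncidentClassifier",
    top_k=None,
)
print(classifier("On Wednesday evening an inquest was held before the coroner."))
```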
# Examples
### Example 1:
**Input:**
```
On Wednesday evening an inquest was held at the Stag and Pheasant before Major Taylor, coroner, and a jury, of whom Mr. Joel Casson was foreman, on the body of John William Birks, grocer, of 23, Huddersfield Road, who cut his throat on Tuesday evening.
```
**Output:**
```
{
'Incident': 0.974,
'Not Incident': 0.026
}
```
### Example 2:
**Input:**
```
The death-rate by accidents among colliers is, at least, from six to seven times as great as the death-rate from violence among the whole population, including suicides homicides, and the dangerous occupations.
```
**Output:**
```
{
'Not Incident': 0.577,
'Incident': 0.423
}
```
# Uses
The classifier can be used, for instance, to obtain larger datasets reporting on cases of suicide in historical digitized newspapers, to then carry out larger-scale analyses on the language used in the reports.
# Bias, Risks, and Limitations
The classifier was trained on digitized newspaper data containing many OCR errors and, while text segmentation was meant to capture individual news articles, each labeled item in the training dataset very often spans multiple articles. This will necessarily have introduced bias in the model because of the extra content unrelated to reporting on suicide.
⚠ **NB**: We did not carry out a systematic evaluation of the effect of bad news article segmentation on the quality of the classifier.
# Training Details
This model was selected from among several training runs based on its accuracy on the evaluation set.
Models based on RoBERTa were also compared to those based on [bert_1760_1900](https://huggingface.co/Livingwithmachines/bert_1760_1900), which achieved a slightly lower performance despite hyperparameter tuning.
In the following report, the model in this repository corresponds to the one labeled `roberta-7`, specifically the output of epoch 4, which returned the highest accuracy (>0.96).
<img src="https://cdn-uploads.huggingface.co/production/uploads/6342a31d5b97f509388807f3/KXqMD4Pchpmkee5CMFFYb.png" style="width: 90%;" />
## Training Data
https://huggingface.co/datasets/npedrazzini/hist_suicide_incident
# Model Card Authors
Nilo Pedrazzini
# Model Card Contact
npedrazzini@turing.ac.uk
# How to use the model
Use the code below to get started with the model.
Import and load the model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "npedrazzini/HistoroBERTa-SuicideIncidentClassifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Generate prediction:
```python
input_text = "The death-rate by accidents among colliers is, at least, from six to seven times as great as the death-rate from violence among the whole population, including suicides homicides, and the dangerous occupations."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
probabilities = logits.softmax(dim=-1)
```
Print predicted label:
```python
predicted_label_id = probabilities.argmax().item()
predicted_label = model.config.id2label[predicted_label_id]
print(predicted_label)
```
Output:
```
NotIncident
```
Print probability of each label:
```python
label_probabilities = {label: prob for label, prob in zip(model.config.id2label.values(), probabilities.squeeze().tolist())}
label_probabilities_sorted = dict(sorted(label_probabilities.items(), key=lambda item: item[1], reverse=True))
print(label_probabilities_sorted)
```
Output:
```
{'NotIncident': 0.5880260467529297, 'Incident': 0.4119739532470703}
``` | {"language": ["en"], "license": "mit", "tags": ["roberta-based", "historical newspaper", "late modern english", "text classification", "not-for-all-audiences"], "datasets": ["npedrazzini/hist_suicide_incident"], "metrics": ["accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "On Wednesday evening an inquest was held at the Stag and Pheasant before Major Taylor, coroner, and a jury, of whom Mr. Joel Casson was foreman, on the body of John William Birks, grocer, of 23, Huddersfield Road, who cut his throat on Tuesday evening.", "example_title": "Example 1"}, {"text": "The death-rate by accidents among colliers is, at least, from six to seven times as great as the death-rate from violence among the whole population, including suicides homicides, and the dangerous occupations.", "example_title": "Example 2"}]} | npedrazzini/HistoroBERTa-SuicideIncidentClassifier | null | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"roberta-based",
"historical newspaper",
"late modern english",
"text classification",
"not-for-all-audiences",
"en",
"dataset:npedrazzini/hist_suicide_incident",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:04:57+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tf #roberta #text-classification #roberta-based #historical newspaper #late modern english #text classification #not-for-all-audiences #en #dataset-npedrazzini/hist_suicide_incident #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# HistoroBERTa-SuicideIncidentClassifier
A binary classifier based on the RoBERTa-base architecture, fine-tuned on historical British newspaper articles to discern whether news reports discuss (confirmed or speculated) suicide cases, investigations, or court cases related to suicides. It attempts to differentiate between texts where _suicide(s)_ or _suicidal_ is used in the context of actual incidents and those where these terms appear figuratively or in broader, non-specific discussions (e.g., mention of the number of suicides in the context of vital statistics; philosophical discussions around the morality of suicide at an abstract level; etc.).
# Overview
- Model Name: HistoroBERTa-SuicideIncidentClassifier
- Task: Binary Classification
- Labels: ['Incident', 'Not Incident']
- Base Model: RoBERTa (A Robustly Optimized BERT Pretraining Approach) base model
- Language: 19th-century English (1780-1920)
- Developed by: Nilo Pedrazzini, Daniel CS Wilson
# Input Format
A 'str'-type input.
# Output Format
The predicted label ('Incident' or 'Not Incident'), with the confidence score for each label.
# Examples
### Example 1:
Input:
Output:
### Example 2:
Input:
Output:
# Uses
The classifier can be used, for instance, to obtain larger datasets reporting on cases of suicide in historical digitized newspapers, to then carry out larger-scale analyses on the language used in the reports.
# Bias, Risks, and Limitations
The classifier was trained on digitized newspaper data containing many OCR errors and, while text segmentation was meant to capture individual news articles, each labeled item in the training dataset very often spans multiple articles. This will necessarily have introduced bias in the model because of the extra content unrelated to reporting on suicide.
⚠ NB: We did not carry out a systematic evaluation of the effect of bad news article segmentation on the quality of the classifier.
# Training Details
This model was selected from several training runs on the basis of its accuracy on the evaluation set.
RoBERTa-based models were also compared to models based on bert_1760_1900, which achieved slightly lower performance despite hyperparameter tuning.
In the following report, the model in this repository corresponds to the one labeled 'roberta-7', specifically the output of epoch 4, which returned the highest accuracy (>0.96).
<img src="URL" style="width: 90%;" />
## Training Data
URL
# Model Card Authors
Nilo Pedrazzini
# Model Card Contact
npedrazzini@URL
# How to use the model
Use the code below to get started with the model.
Import and load the model:
Generate prediction:
Print predicted label:
Output:
Print probability of each label:
Output:
| [
"# HistoroBERTa-SuicideIncidentClassifier\n\nA binary classifier based on the RoBERTa-base architecture, fine-tuned on historical British newspaper articles to discern whether news reports discuss (confirmed or speculated) suicide cases, investigations, or court cases related to suicides. It attempts to differentiate between texts where _suicide_(_s_); or _suicidal_ is used in the context of actual incidents and those where these terms appear figuratively or in broader, non-specific discussions (e.g., mention of the number of suicides in the context of vital statistics; philosophical discussions around the morality of suicide at an abstract level; etc.).",
"# Overview\n- Model Name: HistoroBERTa-SuicideIncidentClassifier\n- Task: Binary Classification \n- Labels: ['Incident', 'Not Incident']\n- Base Model: RoBERTa (A Robustly Optimized BERT Pretraining Approach) base model\n- Language: 19th-century English (1780-1920)\n- Developed by: Nilo Pedrazzini, Daniel CS Wilson",
"# Input Format\nA 'str'-type input.",
"# Output Format\nThe predicted label ('Incident' or 'Not Incident'), with the confidence score for each labels.",
"# Examples",
"### Example 1:\n\nInput:\n\n\nOutput:",
"### Example 2:\n\nInput:\n\n\nOutput:",
"# Uses\nThe classifier can be used, for instance, to obtain larger datasets reporting on cases of suicide in historical digitized newspapers, to then carry out larger-scale analyses on the language used in the reports.",
"# Bias, Risks, and Limitations\n\nThe classifier was trained on digitized newspaper data containing many OCR errors and, while text segmentation was meant to capture individual news articles, each labeled item in the training dataset very often spans multiple articles. This will necessarily have introduced bias in the model because of the extra content unrelated to reporting on suicide. \n\n⚠ NB: We did not carry out a systematic evaluation of the effect of bad news article segmentation on the quality of the classifier.",
"# Training Details\n\nThis model was released upon comparison with other runs, and its selection was based on its accuracy on the evaluation set. \nModels based on RoBERTa were also compared to those based on bert_1760_1900, which achieved a slightly lower performance despite hyperparameter tuning.\n\nIn the following report, the model in this repository corresponds to the one labeled 'roberta-7', specifically the output of epoch 4, which returned the highest accuracy (>0.96).\n\n<img src=\"URL style=\"width: 90%;\" />",
"## Training Data\n\nURL",
"# Model Card Authors\n\nNilo Pedrazzini",
"# Model Card Contact\n\nnpedrazzini@URL",
"# How to use the model\n\nUse the code below to get started with the model.\n\nImport and load the model:\n\n\n\nGenerate prediction:\n\n\n\nPrint predicted label:\n\n\n\nOutput:\n\n\n\nPrint probability of each label:\n\n\n\nOutput:"
] | [
"TAGS\n#transformers #pytorch #tf #roberta #text-classification #roberta-based #historical newspaper #late modern english #text classification #not-for-all-audiences #en #dataset-npedrazzini/hist_suicide_incident #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# HistoroBERTa-SuicideIncidentClassifier\n\nA binary classifier based on the RoBERTa-base architecture, fine-tuned on historical British newspaper articles to discern whether news reports discuss (confirmed or speculated) suicide cases, investigations, or court cases related to suicides. It attempts to differentiate between texts where _suicide_(_s_); or _suicidal_ is used in the context of actual incidents and those where these terms appear figuratively or in broader, non-specific discussions (e.g., mention of the number of suicides in the context of vital statistics; philosophical discussions around the morality of suicide at an abstract level; etc.).",
"# Overview\n- Model Name: HistoroBERTa-SuicideIncidentClassifier\n- Task: Binary Classification \n- Labels: ['Incident', 'Not Incident']\n- Base Model: RoBERTa (A Robustly Optimized BERT Pretraining Approach) base model\n- Language: 19th-century English (1780-1920)\n- Developed by: Nilo Pedrazzini, Daniel CS Wilson",
"# Input Format\nA 'str'-type input.",
"# Output Format\nThe predicted label ('Incident' or 'Not Incident'), with the confidence score for each labels.",
"# Examples",
"### Example 1:\n\nInput:\n\n\nOutput:",
"### Example 2:\n\nInput:\n\n\nOutput:",
"# Uses\nThe classifier can be used, for instance, to obtain larger datasets reporting on cases of suicide in historical digitized newspapers, to then carry out larger-scale analyses on the language used in the reports.",
"# Bias, Risks, and Limitations\n\nThe classifier was trained on digitized newspaper data containing many OCR errors and, while text segmentation was meant to capture individual news articles, each labeled item in the training dataset very often spans multiple articles. This will necessarily have introduced bias in the model because of the extra content unrelated to reporting on suicide. \n\n⚠ NB: We did not carry out a systematic evaluation of the effect of bad news article segmentation on the quality of the classifier.",
"# Training Details\n\nThis model was released upon comparison with other runs, and its selection was based on its accuracy on the evaluation set. \nModels based on RoBERTa were also compared to those based on bert_1760_1900, which achieved a slightly lower performance despite hyperparameter tuning.\n\nIn the following report, the model in this repository corresponds to the one labeled 'roberta-7', specifically the output of epoch 4, which returned the highest accuracy (>0.96).\n\n<img src=\"URL style=\"width: 90%;\" />",
"## Training Data\n\nURL",
"# Model Card Authors\n\nNilo Pedrazzini",
"# Model Card Contact\n\nnpedrazzini@URL",
"# How to use the model\n\nUse the code below to get started with the model.\n\nImport and load the model:\n\n\n\nGenerate prediction:\n\n\n\nPrint predicted label:\n\n\n\nOutput:\n\n\n\nPrint probability of each label:\n\n\n\nOutput:"
] | [
75,
137,
81,
12,
25,
2,
10,
10,
45,
104,
115,
6,
10,
12,
40
] | [
"TAGS\n#transformers #pytorch #tf #roberta #text-classification #roberta-based #historical newspaper #late modern english #text classification #not-for-all-audiences #en #dataset-npedrazzini/hist_suicide_incident #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# HistoroBERTa-SuicideIncidentClassifier\n\nA binary classifier based on the RoBERTa-base architecture, fine-tuned on historical British newspaper articles to discern whether news reports discuss (confirmed or speculated) suicide cases, investigations, or court cases related to suicides. It attempts to differentiate between texts where _suicide_(_s_); or _suicidal_ is used in the context of actual incidents and those where these terms appear figuratively or in broader, non-specific discussions (e.g., mention of the number of suicides in the context of vital statistics; philosophical discussions around the morality of suicide at an abstract level; etc.).# Overview\n- Model Name: HistoroBERTa-SuicideIncidentClassifier\n- Task: Binary Classification \n- Labels: ['Incident', 'Not Incident']\n- Base Model: RoBERTa (A Robustly Optimized BERT Pretraining Approach) base model\n- Language: 19th-century English (1780-1920)\n- Developed by: Nilo Pedrazzini, Daniel CS Wilson# Input Format\nA 'str'-type input.# Output Format\nThe predicted label ('Incident' or 'Not Incident'), with the confidence score for each labels.# Examples### Example 1:\n\nInput:\n\n\nOutput:### Example 2:\n\nInput:\n\n\nOutput:# Uses\nThe classifier can be used, for instance, to obtain larger datasets reporting on cases of suicide in historical digitized newspapers, to then carry out larger-scale analyses on the language used in the reports.# Bias, Risks, and Limitations\n\nThe classifier was trained on digitized newspaper data containing many OCR errors and, while text segmentation was meant to capture individual news articles, each labeled item in the training dataset very often spans multiple articles. This will necessarily have introduced bias in the model because of the extra content unrelated to reporting on suicide. \n\n⚠ NB: We did not carry out a systematic evaluation of the effect of bad news article segmentation on the quality of the classifier.# Training Details\n\nThis model was released upon comparison with other runs, and its selection was based on its accuracy on the evaluation set. \nModels based on RoBERTa were also compared to those based on bert_1760_1900, which achieved a slightly lower performance despite hyperparameter tuning.\n\nIn the following report, the model in this repository corresponds to the one labeled 'roberta-7', specifically the output of epoch 4, which returned the highest accuracy (>0.96).\n\n<img src=\"URL style=\"width: 90%;\" />## Training Data\n\nURL# Model Card Authors\n\nNilo Pedrazzini# Model Card Contact\n\nnpedrazzini@URL# How to use the model\n\nUse the code below to get started with the model.\n\nImport and load the model:\n\n\n\nGenerate prediction:\n\n\n\nPrint predicted label:\n\n\n\nOutput:\n\n\n\nPrint probability of each label:\n\n\n\nOutput:"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ef_t5_baseline_testbest_model
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004798427773610992
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
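For reference, these settings correspond roughly to the following `Seq2SeqTrainingArguments` (a minimal sketch, not the original training script; `output_dir` and the data pipeline are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# the Adam betas/epsilon given there are the optimizer defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="ef_t5_baseline_testbest_model",  # assumed
    learning_rate=0.0004798427773610992,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```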
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-t5/t5-base", "model-index": [{"name": "ef_t5_baseline_testbest_model", "results": []}]} | frayline/ef_t5_baseline_testbest_model | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:05:10+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ef_t5_baseline_testbest_model
This model is a fine-tuned version of google-t5/t5-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004798427773610992
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# ef_t5_baseline_testbest_model\n\nThis model is a fine-tuned version of google-t5/t5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0004798427773610992\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ef_t5_baseline_testbest_model\n\nThis model is a fine-tuned version of google-t5/t5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0004798427773610992\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
64,
38,
7,
9,
9,
4,
113,
40
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ef_t5_baseline_testbest_model\n\nThis model is a fine-tuned version of google-t5/t5-base on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0004798427773610992\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
feature-extraction | transformers |
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("phospho-small-74095b0")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
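If class probabilities are needed rather than hard labels, recent `setfit` releases also expose `predict_proba` (a sketch; availability and output type depend on the installed version):
```python
# Probabilities per class instead of hard predictions (assumes a recent setfit version)
probs = model.predict_proba(["This is a sentence to classify"])
print(probs)
```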
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
| {"language": "en", "license": "apache-2.0"} | phospho-app/phospho-small-74095b0 | null | [
"transformers",
"safetensors",
"mpnet",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:06:31+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us
|
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: URL
| [
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] | [
"TAGS\n#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.",
"## Usage",
"## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] | [
33,
38,
3,
78
] | [
"TAGS\n#transformers #safetensors #mpnet #feature-extraction #en #license-apache-2.0 #endpoints_compatible #region-us \n# phospho-small\n\nThis is a SetFit model that can be used for Text Classification on CPU.\n\nThe model has been trained using an efficient few-shot learning technique.## Usage## References\n\nThis work was possible thanks to the SetFit library and the work of:\n\nTunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. \n\nArXiv: URL"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | OmnicromsBrain/Trainingtest7b | null | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:06:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-70B-Instruct` model.
# Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF)
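For llama.cpp-based inference, a quantized file can be pulled straight from that repository with `llama-cpp-python` (a sketch; the `*Q4_K_M.gguf` filename pattern is an assumption, so pick whichever quant the GGUF repo actually provides):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; see the GGUF repo for the actual filenames
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(out["choices"][0]["message"]["content"])
```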
# Prompt Template
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
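The template does not need to be assembled by hand; `tokenizer.apply_chat_template` renders it (a quick check, assuming the repository's chat template follows the ChatML layout above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt=True appends the opening assistant turn for generation
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```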
# How to use
You can use this model by using `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4` as the model name in Hugging Face's
transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch

model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4"

# Load the model in bfloat16 and let accelerate place it across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2"
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer)

pipe = pipeline(  # named `pipe` to avoid shadowing the imported `pipeline` function
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|im_end|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>") # safer to have this too
]
outputs = pipe(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Sample Response
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
describe about pros and cons of docker system.<|im_end|>
<|im_start|>assistant
```
```
Here are the pros and cons of the Docker system:
**Pros:**
1. **Lightweight**: Docker containers are lightweight and use fewer resources compared to traditional virtual machines.
2. **Isolation**: Docker provides a high level of isolation between containers, ensuring that each application runs independently without interfering with others.
3. **Portability**: Docker containers are highly portable and can run on any system that supports Docker, without requiring specific dependencies or configurations.
4. **Efficient Use of Resources**: Docker containers can share the same kernel and use fewer resources, making them more efficient than traditional virtual machines.
5. **Fast Deployment**: Docker containers can be spun up quickly, making it ideal for rapid deployment and scaling of applications.
6. **Easy Maintenance**: Docker provides a simple way to manage and update applications, with features like rolling updates and rollbacks.
7. **Security**: Docker provides a secure environment for applications, with features like network isolation and encryption.
8. **Large Ecosystem**: Docker has a large and active ecosystem, with a wide range of tools and plugins available for various tasks.
9. **Version Control**: Docker provides an easy way to manage different versions of an application, with features like version tagging and rollbacks.
10. **Cost-Effective**: Docker can help reduce costs by minimizing the need for virtual machines and reducing the overhead of resource allocation.
**Cons:**
1. **Steep Learning Curve**: Docker has a complex architecture and requires a good understanding of containerization, networking, and storage.
2. **Overhead**: While Docker containers are lightweight, they still require some overhead in terms of resources and management.
3. **Security Risks**: If not properly configured, Docker containers can introduce security risks, such as unauthorized access to the host system.
4. **Limited Support for GUI Applications**: Docker is primarily designed for running command-line applications, and running GUI applications can be challenging.
5. **Networking Complexity**: Docker's networking model can be complex, especially in large-scale deployments.
6. **Storage Complexity**: Docker's storage model can be complex, especially when dealing with persistent data.
7. **Limited Support for Legacy Applications**: Docker is designed for modern, cloud-native applications, and may not be suitable for legacy applications that rely on older technologies.
8. **Dependence on Docker Hub**: Docker relies heavily on Docker Hub, which can be a single point of failure if the hub is down or experiencing issues.
9. **Orchestration Complexity**: Docker provides a built-in orchestration tool, Docker Swarm, but it can be complex to set up and manage.
10. **Vendor Lock-in**: Docker is a proprietary technology, and while it has a large ecosystem, it can be difficult to switch to alternative containerization platforms.
Overall, Docker provides a powerful and flexible way to deploy and manage applications, but it requires careful planning, configuration, and management to ensure optimal performance and security.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-70B-Instruct-DPO-v0.4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |78.89|
|AI2 Reasoning Challenge (25-Shot)|72.61|
|HellaSwag (10-Shot) |86.03|
|MMLU (5-Shot) |80.50|
|TruthfulQA (0-shot) |63.26|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |87.34|
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama", "llama-3", "chatml"], "datasets": ["argilla/ultrafeedback-binarized-preferences"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "Llama-3-70B-Instruct-DPO-v0.4", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.03, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.5, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.26}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 87.34, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}]}]} | blockblockblock/Llama-3-70B-Instruct-DPO-v0.4-bpw2.25-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"finetune",
"dpo",
"facebook",
"meta",
"pytorch",
"llama-3",
"chatml",
"conversational",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:08:04+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us
| ![Llama-3 DPO Logo](./URL)
MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
===========================================
This model is a fine-tune (DPO) of 'meta-llama/Meta-Llama-3-70B-Instruct' model.
Quantized GGUF
==============
All GGUF models are available here: MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF
Prompt Template
===============
This model uses 'ChatML' prompt template:
'
How to use
==========
You can use this model by using 'MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4' as the model name in Hugging Face's
transformers library.
Sample Response
---------------
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us \n"
] | [
112
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #chatml #conversational #en #dataset-argilla/ultrafeedback-binarized-preferences #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain16 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:09:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
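While the official snippet is still marked [More Information Needed], the sketch below shows the standard 🤗 transformers causal-LM loading path for this repo (golf2248/07n1ce3). It is a minimal sketch, not documented usage: the chat template, device placement, and generation settings are assumptions inferred from the repo's `llama`, `text-generation`, and `conversational` tags.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golf2248/07n1ce3"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires accelerate

# The "conversational" tag suggests a chat template is defined; this assumes one is present.
messages = [{"role": "user", "content": "Explain what a model card is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```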
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/07n1ce3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:09:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
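Pending the official snippet, here is a minimal sketch for this repo (geniacllm/Mixtral-dMoE-8x2B) using the standard transformers API. The dtype and device placement below are assumptions, and no chat template is assumed since the repo carries only the `text-generation` tag.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "geniacllm/Mixtral-dMoE-8x2B"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; adjust to your hardware
    device_map="auto",           # requires accelerate
)

inputs = tokenizer("A Mixture-of-Experts layer routes each token to", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```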
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | geniacllm/Mixtral-dMoE-8x2B | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:10:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
44,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
feature-extraction | sentence-transformers | This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
This model is designed to support natural language understanding applications, in particular semantic search and similarity over events, meetups, networking, and community content.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer

# Use the full Hub ID; the original snippet omitted the "fine-tuned/" namespace.
llm_name = "fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)

tokens = tokenizer("Your text here", return_tensors="pt")
outputs = model(**tokens)
# outputs.last_hidden_state is (batch, seq_len, dim); mean-pool the tokens
# to obtain a single sentence embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
```
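For the repo's `sentence-similarity` tag, a hedged follow-up sketch: mean pooling plus cosine similarity is standard practice for BERT-style embedders, not a documented specific of this fine-tune. (If the model inherits the base jina-embeddings-v2 remote code, a `model.encode([...])` helper may also be available.)

```python
import torch
import torch.nn.functional as F

def embed(text: str) -> torch.Tensor:
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**tokens).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # mean-pooled sentence vector

a = embed("community meetup for open-source developers")
b = embed("networking event for software engineers")
print(F.cosine_similarity(a, b, dim=0).item())      # closer to 1.0 = more similar
```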
| {"language": ["en"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Events", "Meetups", "Networking", "Community", "Social"], "datasets": ["fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp", "allenai/c4"], "pipeline_tag": "feature-extraction"} | fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Events",
"Meetups",
"Networking",
"Community",
"Social",
"custom_code",
"en",
"dataset:fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:10:59+00:00 | [] | [
"en"
] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #mteb #Events #Meetups #Networking #Community #Social #custom_code #en #dataset-fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp #dataset-allenai/c4 #license-apache-2.0 #endpoints_compatible #region-us
| This model is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for the following use case:
This model is designed to support natural language understanding applications, in particular semantic search and similarity over events, meetups, networking, and community content.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
| [
"## How to Use\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #mteb #Events #Meetups #Networking #Community #Social #custom_code #en #dataset-fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp #dataset-allenai/c4 #license-apache-2.0 #endpoints_compatible #region-us \n",
"## How to Use\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
99,
43
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #mteb #Events #Meetups #Networking #Community #Social #custom_code #en #dataset-fine-tuned/jina-embeddings-v2-base-en-03052024-21on-webapp #dataset-allenai/c4 #license-apache-2.0 #endpoints_compatible #region-us \n## How to Use\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7929
## Model description
More information needed
## Intended uses & limitations
More information needed
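No usage snippet is provided; below is a minimal, hedged sketch of attaching this LoRA adapter to its GPTQ base model. It assumes a CUDA environment with `peft`, `transformers`, and a GPTQ backend (for example `auto-gptq` with `optimum`) installed, and uses the Mistral-Instruct prompt format of the base model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"  # base model named in this card
adapter_id = "jaki-1/shawgpt-ft"                    # this repository's adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights

prompt = "[INST] Summarize LoRA fine-tuning in one line. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```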
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
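For readers reproducing the run, the list above maps onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction, not the card's original training script (the Adam betas and epsilon listed above are the Trainer defaults).

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="shawgpt-ft",            # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,      # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=10,
    fp16=True,                          # "Native AMP" mixed precision
)
```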
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.921 | 0.9892 | 69 | 3.1268 |
| 2.7734 | 1.9928 | 139 | 2.7793 |
| 2.5538 | 2.9964 | 209 | 2.7026 |
| 2.4648 | 4.0 | 279 | 2.7008 |
| 2.4164 | 4.9892 | 348 | 2.7113 |
| 2.3266 | 5.9928 | 418 | 2.6972 |
| 2.2489 | 6.9964 | 488 | 2.7195 |
| 2.1813 | 8.0 | 558 | 2.7573 |
| 2.2002 | 8.9892 | 627 | 2.7826 |
| 2.0955 | 9.8925 | 690 | 2.7929 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]} | jaki-1/shawgpt-ft | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T14:10:59+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
| shawgpt-ft
==========
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7929
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.1.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
55,
151,
5,
52
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
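Until the snippet is filled in, a hedged sketch using the text-generation pipeline; chat-formatted pipeline input and the output shape below assume a recent transformers release, and the chat template is assumed to follow the Qwen1.5 base model named in this card's metadata.

```python
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1-unsloth",  # this repository
    device_map="auto",  # requires accelerate
)
messages = [{"role": "user", "content": "Give one tip for writing a good model card."}]
result = chat(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # last message is the assistant reply
```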
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"license": "apache-2.0", "library_name": "transformers", "basemodel": "Qwen/Qwen1.5-7B"} | YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1-unsloth | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:11:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
56,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-MLM-fine-tuned-model
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1592
## Model description
More information needed
## Intended uses & limitations
More information needed
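In the absence of documented usage, a minimal sketch: as a fill-mask fine-tune of bert-base-multilingual-cased, the model should accept the standard `[MASK]` token through the fill-mask pipeline.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="AmalNlal/Bert-MLM-fine-tuned-model")  # this repository
for pred in unmasker("Paris is the [MASK] of France."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")  # top candidates with probabilities
```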
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2168 | 1.2200 |
| No log | 2.0 | 4336 | 1.1724 |
| No log | 3.0 | 6504 | 1.1592 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-base-multilingual-cased", "model-index": [{"name": "Bert-MLM-fine-tuned-model", "results": []}]} | AmalNlal/Bert-MLM-fine-tuned-model | null | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:12:32+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #fill-mask #generated_from_trainer #base_model-google-bert/bert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Bert-MLM-fine-tuned-model
=========================
This model is a fine-tuned version of google-bert/bert-base-multilingual-cased on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1592
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 12
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #fill-mask #generated_from_trainer #base_model-google-bert/bert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
60,
101,
5,
40
] | [
"TAGS\n#transformers #safetensors #bert #fill-mask #generated_from_trainer #base_model-google-bert/bert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ef_gpt_rad2bhc_testbest_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
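In the absence of documented usage, a minimal sketch via the text-generation pipeline; the prompt is illustrative only, since the training data for this GPT-2 fine-tune is not described.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="frayline/ef_gpt_rad2bhc_testbest_model")  # this repository
out = generator("Example input:", max_new_tokens=50, do_sample=True)
print(out[0]["generated_text"])
```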
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.967888048003899e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "ef_gpt_rad2bhc_testbest_model", "results": []}]} | frayline/ef_gpt_rad2bhc_testbest_model | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:12:43+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ef_gpt_rad2bhc_testbest_model
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.967888048003899e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# ef_gpt_rad2bhc_testbest_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.967888048003899e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ef_gpt_rad2bhc_testbest_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.967888048003899e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
53,
36,
7,
9,
9,
4,
113,
40
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ef_gpt_rad2bhc_testbest_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.967888048003899e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 123\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 7\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
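
Since this section is left as a placeholder, the following is only a minimal, untested sketch of loading a `stablelm` text-generation checkpoint of this kind with the Auto classes; the generation settings and the chat-style prompt are assumptions, not documented values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OwOpeepeepoopoo/herewegoagain14c"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

# The tags mark the model as conversational; if no chat template is defined,
# fall back to plain tokenizer(prompt, return_tensors="pt").
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)  # assumed generation length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```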
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain14c | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:13:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]

BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
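
The card gives no usage snippet. Judging from the repo name (a BLIP-2 OPT-2.7B checkpoint fine-tuned for radiology reports on MIMIC-CXR), a hedged sketch of image-to-text inference might look like the following; the input image, prompt-free call, and generation length are assumptions:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

repo_id = "beingbatman/blip2-opt-2.7b-rad-report-mimic-cxr-d1-old"
processor = AutoProcessor.from_pretrained(repo_id)
# AutoModelForVision2Seq covers both BLIP and BLIP-2 heads; the tags list
# "blip", so the concrete class is resolved from the checkpoint config.
model = AutoModelForVision2Seq.from_pretrained(repo_id)

image = Image.open("chest_xray.png")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=128)  # assumed report length
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```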
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | beingbatman/blip2-opt-2.7b-rad-report-mimic-cxr-d1-old | null | [
"transformers",
"safetensors",
"blip",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:18:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
40,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6113
- F1 Score: 0.7406
- Accuracy: 0.7407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
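
The adapter type and its hyperparameters are not reported in the card; as a hedged sketch only, a PEFT setup consistent with the settings above could look like this, with LoRA and its `r`/`lora_alpha` values as placeholder assumptions:

```python
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

# num_labels=2 is an assumption (binary task); trust_remote_code may be
# required depending on the base architecture.
base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_32768_512_43M", num_labels=2
)
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16)  # placeholder adapter settings
model = get_peft_model(base, peft_config)

args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_32768_512_43M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,  # training_steps: 10000
)
trainer = Trainer(model=model, args=args)  # datasets and metrics omitted here
```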
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6355 | 3.92 | 200 | 0.5919 | 0.6649 | 0.6654 |
| 0.5735 | 7.84 | 400 | 0.5729 | 0.7062 | 0.7062 |
| 0.546 | 11.76 | 600 | 0.5586 | 0.7217 | 0.7259 |
| 0.5193 | 15.69 | 800 | 0.5655 | 0.7173 | 0.7247 |
| 0.504 | 19.61 | 1000 | 0.5275 | 0.7514 | 0.7519 |
| 0.4849 | 23.53 | 1200 | 0.5448 | 0.7243 | 0.7296 |
| 0.4657 | 27.45 | 1400 | 0.5294 | 0.7481 | 0.7481 |
| 0.4528 | 31.37 | 1600 | 0.5494 | 0.7530 | 0.7531 |
| 0.4421 | 35.29 | 1800 | 0.5300 | 0.7654 | 0.7654 |
| 0.4255 | 39.22 | 2000 | 0.5357 | 0.7555 | 0.7556 |
| 0.4131 | 43.14 | 2200 | 0.5389 | 0.7581 | 0.7593 |
| 0.407 | 47.06 | 2400 | 0.5433 | 0.7637 | 0.7642 |
| 0.3896 | 50.98 | 2600 | 0.5581 | 0.7765 | 0.7765 |
| 0.3812 | 54.9 | 2800 | 0.5430 | 0.7636 | 0.7642 |
| 0.3687 | 58.82 | 3000 | 0.5724 | 0.7703 | 0.7704 |
| 0.3572 | 62.75 | 3200 | 0.5860 | 0.7740 | 0.7741 |
| 0.3475 | 66.67 | 3400 | 0.5887 | 0.7581 | 0.7580 |
| 0.3384 | 70.59 | 3600 | 0.6279 | 0.7642 | 0.7642 |
| 0.3251 | 74.51 | 3800 | 0.6395 | 0.7624 | 0.7642 |
| 0.3229 | 78.43 | 4000 | 0.6281 | 0.7729 | 0.7728 |
| 0.3096 | 82.35 | 4200 | 0.6224 | 0.7703 | 0.7704 |
| 0.3001 | 86.27 | 4400 | 0.6456 | 0.7637 | 0.7642 |
| 0.292 | 90.2 | 4600 | 0.6421 | 0.7580 | 0.7580 |
| 0.2874 | 94.12 | 4800 | 0.6674 | 0.7740 | 0.7741 |
| 0.2784 | 98.04 | 5000 | 0.6710 | 0.7712 | 0.7716 |
| 0.2713 | 101.96 | 5200 | 0.6843 | 0.7655 | 0.7654 |
| 0.2639 | 105.88 | 5400 | 0.7009 | 0.7642 | 0.7642 |
| 0.2593 | 109.8 | 5600 | 0.7156 | 0.7567 | 0.7568 |
| 0.2495 | 113.73 | 5800 | 0.6869 | 0.7713 | 0.7716 |
| 0.2462 | 117.65 | 6000 | 0.7264 | 0.7642 | 0.7642 |
| 0.2409 | 121.57 | 6200 | 0.7550 | 0.7580 | 0.7580 |
| 0.2326 | 125.49 | 6400 | 0.7553 | 0.7507 | 0.7506 |
| 0.2311 | 129.41 | 6600 | 0.7816 | 0.7630 | 0.7630 |
| 0.2269 | 133.33 | 6800 | 0.7690 | 0.7553 | 0.7556 |
| 0.2275 | 137.25 | 7000 | 0.7599 | 0.7531 | 0.7531 |
| 0.2204 | 141.18 | 7200 | 0.7752 | 0.7617 | 0.7617 |
| 0.2155 | 145.1 | 7400 | 0.8115 | 0.7580 | 0.7580 |
| 0.2138 | 149.02 | 7600 | 0.7925 | 0.7469 | 0.7469 |
| 0.2167 | 152.94 | 7800 | 0.7839 | 0.7506 | 0.7506 |
| 0.2052 | 156.86 | 8000 | 0.8067 | 0.7605 | 0.7605 |
| 0.2061 | 160.78 | 8200 | 0.8162 | 0.7593 | 0.7593 |
| 0.2059 | 164.71 | 8400 | 0.8187 | 0.7592 | 0.7593 |
| 0.2026 | 168.63 | 8600 | 0.8059 | 0.7556 | 0.7556 |
| 0.2012 | 172.55 | 8800 | 0.8118 | 0.7605 | 0.7605 |
| 0.2027 | 176.47 | 9000 | 0.8080 | 0.7580 | 0.7580 |
| 0.1975 | 180.39 | 9200 | 0.8110 | 0.7593 | 0.7593 |
| 0.1955 | 184.31 | 9400 | 0.8232 | 0.7593 | 0.7593 |
| 0.1953 | 188.24 | 9600 | 0.8376 | 0.7580 | 0.7580 |
| 0.1922 | 192.16 | 9800 | 0.8251 | 0.7605 | 0.7605 |
| 0.1944 | 196.08 | 10000 | 0.8256 | 0.7593 | 0.7593 |
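
The F1 Score and Accuracy columns above can be reproduced with a `compute_metrics` hook of roughly this shape; the averaging mode is an assumption, since the card does not state which F1 variant it reports:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        # "weighted" averaging is assumed; the card only says "F1 Score".
        "f1": f1_score(labels, preds, average="weighted"),
        "accuracy": accuracy_score(labels, preds),
    }
```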
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_0-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_0-seqsight\_32768\_512\_43M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6113
* F1 Score: 0.7406
* Accuracy: 0.7407
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2304
- F1 Score: 0.9014
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
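
In raw PyTorch terms, the optimizer and scheduler listed above correspond to roughly the following sketch; the warmup step count is not reported and is assumed to be zero, and the linear layer stands in for the PEFT-wrapped classifier:

```python
import torch
from torch import nn
from transformers import get_scheduler

model = nn.Linear(8, 2)  # stand-in for the PEFT-wrapped classifier
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                             betas=(0.9, 0.999), eps=1e-08)
lr_scheduler = get_scheduler("linear", optimizer=optimizer,
                             num_warmup_steps=0,        # warmup not reported; assumed 0
                             num_training_steps=10_000)
# Inside the training loop: optimizer.step(); lr_scheduler.step(); optimizer.zero_grad()
```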
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4439 | 0.47 | 200 | 0.3142 | 0.8587 | 0.8587 |
| 0.3305 | 0.95 | 400 | 0.3015 | 0.8683 | 0.8683 |
| 0.2983 | 1.42 | 600 | 0.2712 | 0.8774 | 0.8775 |
| 0.3006 | 1.9 | 800 | 0.2651 | 0.8809 | 0.8811 |
| 0.2827 | 2.37 | 1000 | 0.2585 | 0.8853 | 0.8854 |
| 0.2754 | 2.84 | 1200 | 0.2558 | 0.8874 | 0.8875 |
| 0.2592 | 3.32 | 1400 | 0.2508 | 0.8922 | 0.8922 |
| 0.2625 | 3.79 | 1600 | 0.2588 | 0.8916 | 0.8916 |
| 0.2587 | 4.27 | 1800 | 0.2435 | 0.8938 | 0.8940 |
| 0.2556 | 4.74 | 2000 | 0.2602 | 0.8913 | 0.8913 |
| 0.2508 | 5.21 | 2200 | 0.2437 | 0.8945 | 0.8947 |
| 0.2505 | 5.69 | 2400 | 0.2474 | 0.8967 | 0.8967 |
| 0.2447 | 6.16 | 2600 | 0.2382 | 0.8970 | 0.8971 |
| 0.2417 | 6.64 | 2800 | 0.2396 | 0.8973 | 0.8974 |
| 0.2418 | 7.11 | 3000 | 0.2443 | 0.8972 | 0.8973 |
| 0.2368 | 7.58 | 3200 | 0.2445 | 0.8996 | 0.8996 |
| 0.2414 | 8.06 | 3400 | 0.2315 | 0.9021 | 0.9021 |
| 0.2334 | 8.53 | 3600 | 0.2428 | 0.8978 | 0.8979 |
| 0.2387 | 9.0 | 3800 | 0.2370 | 0.9017 | 0.9017 |
| 0.2316 | 9.48 | 4000 | 0.2319 | 0.9041 | 0.9042 |
| 0.2371 | 9.95 | 4200 | 0.2291 | 0.9037 | 0.9038 |
| 0.2291 | 10.43 | 4400 | 0.2345 | 0.9032 | 0.9032 |
| 0.2284 | 10.9 | 4600 | 0.2371 | 0.9023 | 0.9023 |
| 0.2261 | 11.37 | 4800 | 0.2330 | 0.9030 | 0.9030 |
| 0.235 | 11.85 | 5000 | 0.2351 | 0.9021 | 0.9021 |
| 0.2269 | 12.32 | 5200 | 0.2345 | 0.9032 | 0.9032 |
| 0.2278 | 12.8 | 5400 | 0.2447 | 0.9010 | 0.9010 |
| 0.228 | 13.27 | 5600 | 0.2277 | 0.9033 | 0.9033 |
| 0.2227 | 13.74 | 5800 | 0.2296 | 0.9036 | 0.9036 |
| 0.2272 | 14.22 | 6000 | 0.2287 | 0.9048 | 0.9048 |
| 0.2201 | 14.69 | 6200 | 0.2288 | 0.9040 | 0.9041 |
| 0.2269 | 15.17 | 6400 | 0.2417 | 0.9002 | 0.9002 |
| 0.226 | 15.64 | 6600 | 0.2310 | 0.9032 | 0.9032 |
| 0.2209 | 16.11 | 6800 | 0.2297 | 0.9020 | 0.9021 |
| 0.2165 | 16.59 | 7000 | 0.2274 | 0.9058 | 0.9059 |
| 0.2246 | 17.06 | 7200 | 0.2272 | 0.9064 | 0.9064 |
| 0.2164 | 17.54 | 7400 | 0.2329 | 0.9039 | 0.9039 |
| 0.2211 | 18.01 | 7600 | 0.2240 | 0.9071 | 0.9072 |
| 0.216 | 18.48 | 7800 | 0.2279 | 0.9050 | 0.9050 |
| 0.2209 | 18.96 | 8000 | 0.2276 | 0.9048 | 0.9048 |
| 0.2183 | 19.43 | 8200 | 0.2277 | 0.9047 | 0.9047 |
| 0.2177 | 19.91 | 8400 | 0.2284 | 0.9034 | 0.9035 |
| 0.2193 | 20.38 | 8600 | 0.2263 | 0.9055 | 0.9056 |
| 0.2162 | 20.85 | 8800 | 0.2274 | 0.9040 | 0.9041 |
| 0.214 | 21.33 | 9000 | 0.2295 | 0.9051 | 0.9051 |
| 0.2134 | 21.8 | 9200 | 0.2289 | 0.9048 | 0.9048 |
| 0.218 | 22.27 | 9400 | 0.2275 | 0.9047 | 0.9047 |
| 0.2141 | 22.75 | 9600 | 0.2286 | 0.9058 | 0.9059 |
| 0.2154 | 23.22 | 9800 | 0.2281 | 0.9057 | 0.9057 |
| 0.2182 | 23.7 | 10000 | 0.2273 | 0.9057 | 0.9057 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_1-seqsight\_32768\_512\_43M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2304
* F1 Score: 0.9014
* Accuracy: 0.9014
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2321
- F1 Score: 0.9021
- Accuracy: 0.9021
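
For inference with the published adapter, a hedged sketch of loading it on top of the base model (the label count and example input are assumptions, not documented values):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 is an assumption; trust_remote_code may be required.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGT", return_tensors="pt")  # hypothetical DNA sequence
logits = model(**inputs).logits
```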
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4146 | 0.47 | 200 | 0.2979 | 0.8655 | 0.8655 |
| 0.3089 | 0.95 | 400 | 0.2727 | 0.8817 | 0.8817 |
| 0.2776 | 1.42 | 600 | 0.2494 | 0.8899 | 0.8900 |
| 0.2737 | 1.9 | 800 | 0.2426 | 0.8961 | 0.8962 |
| 0.2594 | 2.37 | 1000 | 0.2409 | 0.8956 | 0.8956 |
| 0.2513 | 2.84 | 1200 | 0.2410 | 0.8947 | 0.8947 |
| 0.2397 | 3.32 | 1400 | 0.2422 | 0.8981 | 0.8981 |
| 0.2449 | 3.79 | 1600 | 0.2384 | 0.8989 | 0.8989 |
| 0.2416 | 4.27 | 1800 | 0.2324 | 0.9037 | 0.9038 |
| 0.2398 | 4.74 | 2000 | 0.2481 | 0.8972 | 0.8973 |
| 0.2349 | 5.21 | 2200 | 0.2360 | 0.8999 | 0.9001 |
| 0.2335 | 5.69 | 2400 | 0.2385 | 0.8987 | 0.8987 |
| 0.2281 | 6.16 | 2600 | 0.2312 | 0.9029 | 0.9030 |
| 0.2258 | 6.64 | 2800 | 0.2312 | 0.8998 | 0.8999 |
| 0.2247 | 7.11 | 3000 | 0.2352 | 0.9019 | 0.9020 |
| 0.2201 | 7.58 | 3200 | 0.2347 | 0.9020 | 0.9020 |
| 0.2255 | 8.06 | 3400 | 0.2230 | 0.9054 | 0.9056 |
| 0.215 | 8.53 | 3600 | 0.2267 | 0.9033 | 0.9033 |
| 0.2223 | 9.0 | 3800 | 0.2280 | 0.9046 | 0.9047 |
| 0.2128 | 9.48 | 4000 | 0.2226 | 0.9096 | 0.9097 |
| 0.218 | 9.95 | 4200 | 0.2198 | 0.9067 | 0.9067 |
| 0.2087 | 10.43 | 4400 | 0.2301 | 0.9075 | 0.9075 |
| 0.2088 | 10.9 | 4600 | 0.2277 | 0.9072 | 0.9072 |
| 0.2071 | 11.37 | 4800 | 0.2231 | 0.9075 | 0.9075 |
| 0.2124 | 11.85 | 5000 | 0.2289 | 0.9043 | 0.9044 |
| 0.2061 | 12.32 | 5200 | 0.2254 | 0.9048 | 0.9048 |
| 0.2073 | 12.8 | 5400 | 0.2354 | 0.9039 | 0.9039 |
| 0.2032 | 13.27 | 5600 | 0.2220 | 0.9086 | 0.9087 |
| 0.2013 | 13.74 | 5800 | 0.2282 | 0.9054 | 0.9054 |
| 0.2061 | 14.22 | 6000 | 0.2203 | 0.9085 | 0.9085 |
| 0.1977 | 14.69 | 6200 | 0.2282 | 0.9085 | 0.9085 |
| 0.2024 | 15.17 | 6400 | 0.2332 | 0.9063 | 0.9063 |
| 0.2018 | 15.64 | 6600 | 0.2248 | 0.9077 | 0.9078 |
| 0.1963 | 16.11 | 6800 | 0.2276 | 0.9043 | 0.9044 |
| 0.1912 | 16.59 | 7000 | 0.2229 | 0.9100 | 0.9100 |
| 0.2007 | 17.06 | 7200 | 0.2215 | 0.9106 | 0.9106 |
| 0.1888 | 17.54 | 7400 | 0.2275 | 0.9097 | 0.9097 |
| 0.1976 | 18.01 | 7600 | 0.2204 | 0.9104 | 0.9105 |
| 0.1896 | 18.48 | 7800 | 0.2251 | 0.9113 | 0.9113 |
| 0.1934 | 18.96 | 8000 | 0.2200 | 0.9088 | 0.9088 |
| 0.1926 | 19.43 | 8200 | 0.2247 | 0.9075 | 0.9075 |
| 0.1892 | 19.91 | 8400 | 0.2316 | 0.9062 | 0.9063 |
| 0.1916 | 20.38 | 8600 | 0.2280 | 0.9072 | 0.9072 |
| 0.1894 | 20.85 | 8800 | 0.2277 | 0.9065 | 0.9066 |
| 0.1864 | 21.33 | 9000 | 0.2292 | 0.9082 | 0.9082 |
| 0.185 | 21.8 | 9200 | 0.2275 | 0.9095 | 0.9096 |
| 0.1918 | 22.27 | 9400 | 0.2245 | 0.9081 | 0.9081 |
| 0.1856 | 22.75 | 9600 | 0.2276 | 0.9074 | 0.9075 |
| 0.1853 | 23.22 | 9800 | 0.2281 | 0.9086 | 0.9087 |
| 0.1874 | 23.7 | 10000 | 0.2269 | 0.9091 | 0.9091 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_1-seqsight\_32768\_512\_43M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2321
* F1 Score: 0.9021
* Accuracy: 0.9021
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5985
- F1 Score: 0.6801
- Accuracy: 0.6790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6491 | 1.04 | 200 | 0.6203 | 0.6358 | 0.6595 |
| 0.6139 | 2.08 | 400 | 0.6525 | 0.6229 | 0.6227 |
| 0.6056 | 3.12 | 600 | 0.6067 | 0.6758 | 0.6761 |
| 0.5978 | 4.17 | 800 | 0.6232 | 0.6649 | 0.6624 |
| 0.5883 | 5.21 | 1000 | 0.6098 | 0.6699 | 0.6680 |
| 0.5829 | 6.25 | 1200 | 0.6085 | 0.6710 | 0.6689 |
| 0.5722 | 7.29 | 1400 | 0.5984 | 0.6777 | 0.6764 |
| 0.5685 | 8.33 | 1600 | 0.6231 | 0.6688 | 0.6663 |
| 0.5613 | 9.38 | 1800 | 0.6276 | 0.6632 | 0.6608 |
| 0.5524 | 10.42 | 2000 | 0.6044 | 0.6773 | 0.6768 |
| 0.5508 | 11.46 | 2200 | 0.6039 | 0.6693 | 0.6712 |
| 0.5345 | 12.5 | 2400 | 0.6154 | 0.6766 | 0.6761 |
| 0.5272 | 13.54 | 2600 | 0.6787 | 0.6570 | 0.6546 |
| 0.5231 | 14.58 | 2800 | 0.6346 | 0.6727 | 0.6716 |
| 0.5208 | 15.62 | 3000 | 0.6651 | 0.6665 | 0.6641 |
| 0.5072 | 16.67 | 3200 | 0.6459 | 0.6751 | 0.6735 |
| 0.4981 | 17.71 | 3400 | 0.7131 | 0.6359 | 0.6351 |
| 0.4907 | 18.75 | 3600 | 0.6785 | 0.6630 | 0.6605 |
| 0.4875 | 19.79 | 3800 | 0.6916 | 0.6662 | 0.6637 |
| 0.475 | 20.83 | 4000 | 0.6827 | 0.6731 | 0.6709 |
| 0.4781 | 21.88 | 4200 | 0.7175 | 0.6698 | 0.6673 |
| 0.4623 | 22.92 | 4400 | 0.7095 | 0.6615 | 0.6592 |
| 0.4578 | 23.96 | 4600 | 0.7411 | 0.6587 | 0.6572 |
| 0.4513 | 25.0 | 4800 | 0.7685 | 0.6460 | 0.6455 |
| 0.4443 | 26.04 | 5000 | 0.7568 | 0.6595 | 0.6572 |
| 0.4349 | 27.08 | 5200 | 0.7462 | 0.6687 | 0.6663 |
| 0.4334 | 28.12 | 5400 | 0.7394 | 0.6627 | 0.6601 |
| 0.4244 | 29.17 | 5600 | 0.7322 | 0.6685 | 0.6660 |
| 0.4155 | 30.21 | 5800 | 0.8332 | 0.6513 | 0.6500 |
| 0.4177 | 31.25 | 6000 | 0.7752 | 0.6671 | 0.6647 |
| 0.4104 | 32.29 | 6200 | 0.7569 | 0.6699 | 0.6676 |
| 0.4023 | 33.33 | 6400 | 0.7934 | 0.6654 | 0.6631 |
| 0.3926 | 34.38 | 6600 | 0.7863 | 0.6650 | 0.6624 |
| 0.3932 | 35.42 | 6800 | 0.8033 | 0.6585 | 0.6566 |
| 0.3864 | 36.46 | 7000 | 0.8465 | 0.6496 | 0.6487 |
| 0.3869 | 37.5 | 7200 | 0.7903 | 0.6705 | 0.6680 |
| 0.3732 | 38.54 | 7400 | 0.8169 | 0.6601 | 0.6575 |
| 0.3763 | 39.58 | 7600 | 0.8006 | 0.6720 | 0.6696 |
| 0.376 | 40.62 | 7800 | 0.8370 | 0.6606 | 0.6582 |
| 0.3687 | 41.67 | 8000 | 0.8467 | 0.6533 | 0.6514 |
| 0.3625 | 42.71 | 8200 | 0.8433 | 0.6672 | 0.6647 |
| 0.3657 | 43.75 | 8400 | 0.8259 | 0.6627 | 0.6601 |
| 0.3587 | 44.79 | 8600 | 0.8573 | 0.6622 | 0.6598 |
| 0.3544 | 45.83 | 8800 | 0.8594 | 0.6674 | 0.6654 |
| 0.3594 | 46.88 | 9000 | 0.8783 | 0.6523 | 0.6507 |
| 0.3547 | 47.92 | 9200 | 0.8590 | 0.6667 | 0.6644 |
| 0.3543 | 48.96 | 9400 | 0.8408 | 0.6653 | 0.6628 |
| 0.3477 | 50.0 | 9600 | 0.8791 | 0.6603 | 0.6582 |
| 0.3496 | 51.04 | 9800 | 0.8755 | 0.6626 | 0.6605 |
| 0.3474 | 52.08 | 10000 | 0.8741 | 0.6643 | 0.6621 |
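
Validation loss bottoms out early (around 0.598 near step 1400) and then climbs steadily while training loss keeps falling, a classic overfitting signature. The card does not say how the best checkpoint was selected; if retraining, a best-model/early-stopping configuration of roughly this form would be one way to do it (an assumption, not the card's documented procedure):

```python
from transformers import (AutoModelForSequenceClassification,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_34M", num_labels=2  # assumed binary task
)
args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,                 # matches the 200-step eval cadence above
    save_strategy="steps",
    save_steps=200,
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    greater_is_better=False,
)
trainer = Trainer(model=model, args=args,
                  callbacks=[EarlyStoppingCallback(early_stopping_patience=10)])
```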
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_34M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
] | null | 2024-05-03T14:22:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_16384\_512\_34M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5985
* F1 Score: 0.6801
* Accuracy: 0.6790
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2537
- F1 Score: 0.8863
- Accuracy: 0.8863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
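
As a hedged sketch of the data side, the linked GUE dataset can be loaded and tokenized as follows; the sequence column name is an assumption and should be checked against the dataset card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("mahdibaghbanzadeh/GUE_mouse_1")  # dataset linked from this card
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_32768_512_43M")

def tokenize(batch):
    # Column name "sequence" is an assumption; check the dataset card.
    return tokenizer(batch["sequence"], truncation=True)

tokenized = ds.map(tokenize, batched=True)
```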
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4999 | 0.47 | 200 | 0.3566 | 0.8414 | 0.8415 |
| 0.3821 | 0.95 | 400 | 0.3269 | 0.8540 | 0.8540 |
| 0.3435 | 1.42 | 600 | 0.3149 | 0.8621 | 0.8621 |
| 0.3439 | 1.9 | 800 | 0.2987 | 0.8657 | 0.8658 |
| 0.3217 | 2.37 | 1000 | 0.2985 | 0.8696 | 0.8697 |
| 0.3167 | 2.84 | 1200 | 0.2899 | 0.8747 | 0.8747 |
| 0.3008 | 3.32 | 1400 | 0.2855 | 0.8753 | 0.8753 |
| 0.3047 | 3.79 | 1600 | 0.2820 | 0.8743 | 0.8743 |
| 0.3011 | 4.27 | 1800 | 0.2769 | 0.8753 | 0.8756 |
| 0.2974 | 4.74 | 2000 | 0.2785 | 0.8766 | 0.8766 |
| 0.2889 | 5.21 | 2200 | 0.2745 | 0.8787 | 0.8789 |
| 0.2904 | 5.69 | 2400 | 0.2695 | 0.8811 | 0.8811 |
| 0.2861 | 6.16 | 2600 | 0.2652 | 0.8832 | 0.8833 |
| 0.2802 | 6.64 | 2800 | 0.2637 | 0.8836 | 0.8836 |
| 0.2799 | 7.11 | 3000 | 0.2654 | 0.8854 | 0.8854 |
| 0.2747 | 7.58 | 3200 | 0.2661 | 0.8826 | 0.8826 |
| 0.2781 | 8.06 | 3400 | 0.2605 | 0.8858 | 0.8858 |
| 0.2735 | 8.53 | 3600 | 0.2614 | 0.8866 | 0.8866 |
| 0.2769 | 9.0 | 3800 | 0.2575 | 0.8857 | 0.8857 |
| 0.2701 | 9.48 | 4000 | 0.2636 | 0.8861 | 0.8863 |
| 0.2722 | 9.95 | 4200 | 0.2549 | 0.8870 | 0.8870 |
| 0.2673 | 10.43 | 4400 | 0.2560 | 0.8879 | 0.8879 |
| 0.2675 | 10.9 | 4600 | 0.2562 | 0.8910 | 0.8910 |
| 0.263 | 11.37 | 4800 | 0.2539 | 0.8915 | 0.8915 |
| 0.2721 | 11.85 | 5000 | 0.2533 | 0.8898 | 0.8898 |
| 0.2635 | 12.32 | 5200 | 0.2581 | 0.8909 | 0.8909 |
| 0.266 | 12.8 | 5400 | 0.2528 | 0.8918 | 0.8918 |
| 0.2653 | 13.27 | 5600 | 0.2513 | 0.8908 | 0.8909 |
| 0.2598 | 13.74 | 5800 | 0.2509 | 0.8925 | 0.8925 |
| 0.2636 | 14.22 | 6000 | 0.2504 | 0.8910 | 0.8910 |
| 0.2583 | 14.69 | 6200 | 0.2515 | 0.8922 | 0.8922 |
| 0.2655 | 15.17 | 6400 | 0.2550 | 0.8919 | 0.8919 |
| 0.2624 | 15.64 | 6600 | 0.2504 | 0.8929 | 0.8930 |
| 0.2599 | 16.11 | 6800 | 0.2479 | 0.8927 | 0.8928 |
| 0.2571 | 16.59 | 7000 | 0.2486 | 0.8953 | 0.8953 |
| 0.2601 | 17.06 | 7200 | 0.2470 | 0.8949 | 0.8949 |
| 0.2594 | 17.54 | 7400 | 0.2496 | 0.8941 | 0.8941 |
| 0.2594 | 18.01 | 7600 | 0.2467 | 0.8950 | 0.8950 |
| 0.2571 | 18.48 | 7800 | 0.2490 | 0.8941 | 0.8941 |
| 0.2604 | 18.96 | 8000 | 0.2473 | 0.8941 | 0.8941 |
| 0.2565 | 19.43 | 8200 | 0.2475 | 0.8965 | 0.8965 |
| 0.2572 | 19.91 | 8400 | 0.2460 | 0.8936 | 0.8937 |
| 0.257 | 20.38 | 8600 | 0.2465 | 0.8970 | 0.8970 |
| 0.2561 | 20.85 | 8800 | 0.2463 | 0.8956 | 0.8956 |
| 0.2526 | 21.33 | 9000 | 0.2475 | 0.8964 | 0.8964 |
| 0.2531 | 21.8 | 9200 | 0.2478 | 0.8964 | 0.8964 |
| 0.2591 | 22.27 | 9400 | 0.2469 | 0.8959 | 0.8959 |
| 0.254 | 22.75 | 9600 | 0.2462 | 0.8963 | 0.8964 |
| 0.2574 | 23.22 | 9800 | 0.2465 | 0.8959 | 0.8959 |
| 0.2575 | 23.7 | 10000 | 0.2464 | 0.8965 | 0.8965 |
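
For reference, the F1 Score and Accuracy columns above can be computed from label predictions with scikit-learn; the weighted averaging below is an assumption, since the card does not state which F1 average is reported:

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder labels for illustration only.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

print(f"F1 Score: {f1_score(y_true, y_pred, average='weighted'):.4f}")
print(f"Accuracy: {accuracy_score(y_true, y_pred):.4f}")
```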
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_1-seqsight\_32768\_512\_43M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2537
* F1 Score: 0.8863
* Accuracy: 0.8863
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5964
- F1 Score: 0.6939
- Accuracy: 0.6941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6317 | 1.69 | 200 | 0.6029 | 0.6769 | 0.6782 |
| 0.5874 | 3.39 | 400 | 0.5773 | 0.6854 | 0.6883 |
| 0.5618 | 5.08 | 600 | 0.5587 | 0.7217 | 0.7217 |
| 0.546 | 6.78 | 800 | 0.5472 | 0.7191 | 0.7191 |
| 0.5315 | 8.47 | 1000 | 0.5645 | 0.7062 | 0.7079 |
| 0.5124 | 10.17 | 1200 | 0.5576 | 0.7169 | 0.7169 |
| 0.5043 | 11.86 | 1400 | 0.5609 | 0.7185 | 0.7185 |
| 0.484 | 13.56 | 1600 | 0.5663 | 0.7211 | 0.7212 |
| 0.4742 | 15.25 | 1800 | 0.5853 | 0.7216 | 0.7217 |
| 0.4602 | 16.95 | 2000 | 0.5788 | 0.7080 | 0.7100 |
| 0.4476 | 18.64 | 2200 | 0.5648 | 0.7223 | 0.7223 |
| 0.43 | 20.34 | 2400 | 0.6184 | 0.7188 | 0.7191 |
| 0.4208 | 22.03 | 2600 | 0.6134 | 0.7186 | 0.7185 |
| 0.4023 | 23.73 | 2800 | 0.6485 | 0.7131 | 0.7132 |
| 0.3945 | 25.42 | 3000 | 0.6664 | 0.7175 | 0.7191 |
| 0.3858 | 27.12 | 3200 | 0.6884 | 0.7117 | 0.7116 |
| 0.3721 | 28.81 | 3400 | 0.6782 | 0.7178 | 0.7180 |
| 0.3564 | 30.51 | 3600 | 0.7114 | 0.7149 | 0.7148 |
| 0.3545 | 32.2 | 3800 | 0.7071 | 0.7078 | 0.7084 |
| 0.3425 | 33.9 | 4000 | 0.7339 | 0.7028 | 0.7037 |
| 0.3318 | 35.59 | 4200 | 0.7366 | 0.7064 | 0.7063 |
| 0.3185 | 37.29 | 4400 | 0.7456 | 0.7069 | 0.7069 |
| 0.3079 | 38.98 | 4600 | 0.7941 | 0.6991 | 0.6994 |
| 0.3018 | 40.68 | 4800 | 0.7694 | 0.7078 | 0.7079 |
| 0.2917 | 42.37 | 5000 | 0.8322 | 0.7068 | 0.7069 |
| 0.2878 | 44.07 | 5200 | 0.8276 | 0.7047 | 0.7047 |
| 0.2823 | 45.76 | 5400 | 0.8373 | 0.7067 | 0.7069 |
| 0.269 | 47.46 | 5600 | 0.8612 | 0.7080 | 0.7079 |
| 0.2698 | 49.15 | 5800 | 0.8352 | 0.7026 | 0.7026 |
| 0.2556 | 50.85 | 6000 | 0.8812 | 0.7069 | 0.7069 |
| 0.2476 | 52.54 | 6200 | 0.9216 | 0.7036 | 0.7042 |
| 0.2524 | 54.24 | 6400 | 0.8557 | 0.7091 | 0.7090 |
| 0.2501 | 55.93 | 6600 | 0.9083 | 0.7051 | 0.7053 |
| 0.2378 | 57.63 | 6800 | 0.9140 | 0.7107 | 0.7106 |
| 0.2333 | 59.32 | 7000 | 0.9598 | 0.7017 | 0.7015 |
| 0.2363 | 61.02 | 7200 | 0.8962 | 0.7042 | 0.7042 |
| 0.2248 | 62.71 | 7400 | 0.9316 | 0.6979 | 0.6978 |
| 0.225 | 64.41 | 7600 | 0.9546 | 0.7116 | 0.7116 |
| 0.2202 | 66.1 | 7800 | 0.9617 | 0.7048 | 0.7047 |
| 0.2195 | 67.8 | 8000 | 0.9515 | 0.7048 | 0.7047 |
| 0.2144 | 69.49 | 8200 | 0.9789 | 0.6995 | 0.6994 |
| 0.2103 | 71.19 | 8400 | 0.9751 | 0.7053 | 0.7053 |
| 0.2104 | 72.88 | 8600 | 0.9788 | 0.7080 | 0.7084 |
| 0.2051 | 74.58 | 8800 | 0.9890 | 0.7080 | 0.7079 |
| 0.2035 | 76.27 | 9000 | 0.9850 | 0.7080 | 0.7079 |
| 0.2051 | 77.97 | 9200 | 0.9909 | 0.7026 | 0.7026 |
| 0.1999 | 79.66 | 9400 | 1.0087 | 0.7080 | 0.7079 |
| 0.1925 | 81.36 | 9600 | 1.0246 | 0.7059 | 0.7058 |
| 0.1992 | 83.05 | 9800 | 1.0152 | 0.7058 | 0.7058 |
| 0.1987 | 84.75 | 10000 | 1.0124 | 0.7059 | 0.7058 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_4-seqsight\_32768\_512\_43M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5964
* F1 Score: 0.6939
* Accuracy: 0.6941
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5855
- F1 Score: 0.6982
- Accuracy: 0.6984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6573 | 1.69 | 200 | 0.6152 | 0.6667 | 0.6670 |
| 0.617 | 3.39 | 400 | 0.6072 | 0.6717 | 0.6734 |
| 0.6032 | 5.08 | 600 | 0.5950 | 0.6811 | 0.6814 |
| 0.5952 | 6.78 | 800 | 0.5895 | 0.6790 | 0.6798 |
| 0.5877 | 8.47 | 1000 | 0.5817 | 0.6888 | 0.6888 |
| 0.5811 | 10.17 | 1200 | 0.5810 | 0.6975 | 0.6978 |
| 0.5765 | 11.86 | 1400 | 0.5763 | 0.6979 | 0.6978 |
| 0.5709 | 13.56 | 1600 | 0.5743 | 0.6985 | 0.6984 |
| 0.569 | 15.25 | 1800 | 0.5730 | 0.7000 | 0.6999 |
| 0.5642 | 16.95 | 2000 | 0.5742 | 0.6965 | 0.6989 |
| 0.5603 | 18.64 | 2200 | 0.5668 | 0.7050 | 0.7058 |
| 0.5573 | 20.34 | 2400 | 0.5644 | 0.7086 | 0.7084 |
| 0.557 | 22.03 | 2600 | 0.5865 | 0.6860 | 0.6920 |
| 0.5545 | 23.73 | 2800 | 0.5598 | 0.7162 | 0.7164 |
| 0.5478 | 25.42 | 3000 | 0.5631 | 0.7170 | 0.7169 |
| 0.549 | 27.12 | 3200 | 0.5689 | 0.7078 | 0.7100 |
| 0.5459 | 28.81 | 3400 | 0.5574 | 0.7230 | 0.7233 |
| 0.5431 | 30.51 | 3600 | 0.5561 | 0.7260 | 0.7260 |
| 0.5424 | 32.2 | 3800 | 0.5564 | 0.7277 | 0.7281 |
| 0.5412 | 33.9 | 4000 | 0.5610 | 0.7200 | 0.7212 |
| 0.536 | 35.59 | 4200 | 0.5696 | 0.7193 | 0.7212 |
| 0.5323 | 37.29 | 4400 | 0.5608 | 0.7275 | 0.7281 |
| 0.5336 | 38.98 | 4600 | 0.5550 | 0.7223 | 0.7223 |
| 0.5308 | 40.68 | 4800 | 0.5746 | 0.7186 | 0.7212 |
| 0.5289 | 42.37 | 5000 | 0.5629 | 0.7270 | 0.7276 |
| 0.531 | 44.07 | 5200 | 0.5628 | 0.7209 | 0.7217 |
| 0.5281 | 45.76 | 5400 | 0.5583 | 0.7216 | 0.7223 |
| 0.524 | 47.46 | 5600 | 0.5593 | 0.7274 | 0.7276 |
| 0.5263 | 49.15 | 5800 | 0.5593 | 0.7210 | 0.7223 |
| 0.5239 | 50.85 | 6000 | 0.5602 | 0.7201 | 0.7212 |
| 0.5232 | 52.54 | 6200 | 0.5573 | 0.7313 | 0.7313 |
| 0.5219 | 54.24 | 6400 | 0.5546 | 0.7223 | 0.7228 |
| 0.5235 | 55.93 | 6600 | 0.5543 | 0.7246 | 0.7249 |
| 0.5205 | 57.63 | 6800 | 0.5516 | 0.7265 | 0.7265 |
| 0.5165 | 59.32 | 7000 | 0.5595 | 0.7258 | 0.7265 |
| 0.5208 | 61.02 | 7200 | 0.5550 | 0.7273 | 0.7276 |
| 0.5159 | 62.71 | 7400 | 0.5577 | 0.7278 | 0.7281 |
| 0.519 | 64.41 | 7600 | 0.5556 | 0.7250 | 0.7254 |
| 0.519 | 66.1 | 7800 | 0.5561 | 0.7229 | 0.7233 |
| 0.516 | 67.8 | 8000 | 0.5546 | 0.7258 | 0.7260 |
| 0.5173 | 69.49 | 8200 | 0.5528 | 0.7294 | 0.7297 |
| 0.5151 | 71.19 | 8400 | 0.5541 | 0.7278 | 0.7281 |
| 0.5159 | 72.88 | 8600 | 0.5528 | 0.7275 | 0.7276 |
| 0.5113 | 74.58 | 8800 | 0.5565 | 0.7266 | 0.7270 |
| 0.5141 | 76.27 | 9000 | 0.5574 | 0.7265 | 0.7270 |
| 0.5157 | 77.97 | 9200 | 0.5563 | 0.7265 | 0.7270 |
| 0.5129 | 79.66 | 9400 | 0.5548 | 0.7258 | 0.7260 |
| 0.5146 | 81.36 | 9600 | 0.5545 | 0.7274 | 0.7276 |
| 0.516 | 83.05 | 9800 | 0.5547 | 0.7267 | 0.7270 |
| 0.5146 | 84.75 | 10000 | 0.5553 | 0.7277 | 0.7281 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_4-seqsight\_32768\_512\_43M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5855
* F1 Score: 0.6982
* Accuracy: 0.6984
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6723
- F1 Score: 0.7295
- Accuracy: 0.7296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6203 | 3.92 | 200 | 0.5787 | 0.6882 | 0.6901 |
| 0.5482 | 7.84 | 400 | 0.5451 | 0.7405 | 0.7407 |
| 0.5042 | 11.76 | 600 | 0.5385 | 0.7254 | 0.7296 |
| 0.4591 | 15.69 | 800 | 0.5251 | 0.7574 | 0.7580 |
| 0.4232 | 19.61 | 1000 | 0.5538 | 0.7651 | 0.7654 |
| 0.3842 | 23.53 | 1200 | 0.6002 | 0.7579 | 0.7580 |
| 0.3437 | 27.45 | 1400 | 0.5976 | 0.7667 | 0.7667 |
| 0.3056 | 31.37 | 1600 | 0.7249 | 0.7505 | 0.7519 |
| 0.2826 | 35.29 | 1800 | 0.7146 | 0.7504 | 0.7506 |
| 0.2464 | 39.22 | 2000 | 0.7265 | 0.7507 | 0.7506 |
| 0.2234 | 43.14 | 2200 | 0.7985 | 0.7519 | 0.7519 |
| 0.2081 | 47.06 | 2400 | 0.8031 | 0.7567 | 0.7568 |
| 0.1827 | 50.98 | 2600 | 0.8586 | 0.7566 | 0.7568 |
| 0.1681 | 54.9 | 2800 | 0.9018 | 0.7456 | 0.7457 |
| 0.1544 | 58.82 | 3000 | 0.9888 | 0.7405 | 0.7407 |
| 0.1384 | 62.75 | 3200 | 1.0131 | 0.7494 | 0.7494 |
| 0.132 | 66.67 | 3400 | 1.0273 | 0.7579 | 0.7580 |
| 0.12 | 70.59 | 3600 | 1.0749 | 0.7403 | 0.7407 |
| 0.1132 | 74.51 | 3800 | 1.0950 | 0.7358 | 0.7358 |
| 0.1097 | 78.43 | 4000 | 1.1436 | 0.7378 | 0.7383 |
| 0.099 | 82.35 | 4200 | 1.1471 | 0.7334 | 0.7333 |
| 0.0953 | 86.27 | 4400 | 1.2057 | 0.7407 | 0.7407 |
| 0.0929 | 90.2 | 4600 | 1.1777 | 0.7419 | 0.7420 |
| 0.0891 | 94.12 | 4800 | 1.1411 | 0.7430 | 0.7432 |
| 0.0793 | 98.04 | 5000 | 1.2071 | 0.7445 | 0.7444 |
| 0.0767 | 101.96 | 5200 | 1.1752 | 0.7383 | 0.7383 |
| 0.0773 | 105.88 | 5400 | 1.1790 | 0.7370 | 0.7370 |
| 0.0706 | 109.8 | 5600 | 1.2906 | 0.7456 | 0.7457 |
| 0.0666 | 113.73 | 5800 | 1.2703 | 0.7382 | 0.7383 |
| 0.0649 | 117.65 | 6000 | 1.2504 | 0.7457 | 0.7457 |
| 0.0625 | 121.57 | 6200 | 1.2919 | 0.7432 | 0.7432 |
| 0.0597 | 125.49 | 6400 | 1.3010 | 0.7290 | 0.7296 |
| 0.0601 | 129.41 | 6600 | 1.3406 | 0.7480 | 0.7481 |
| 0.0552 | 133.33 | 6800 | 1.3789 | 0.7395 | 0.7395 |
| 0.0568 | 137.25 | 7000 | 1.3155 | 0.7432 | 0.7432 |
| 0.0524 | 141.18 | 7200 | 1.3413 | 0.7321 | 0.7321 |
| 0.0507 | 145.1 | 7400 | 1.3864 | 0.7445 | 0.7444 |
| 0.0483 | 149.02 | 7600 | 1.3598 | 0.7407 | 0.7407 |
| 0.0474 | 152.94 | 7800 | 1.3785 | 0.7445 | 0.7444 |
| 0.0417 | 156.86 | 8000 | 1.4386 | 0.7469 | 0.7469 |
| 0.0474 | 160.78 | 8200 | 1.3778 | 0.7432 | 0.7432 |
| 0.0443 | 164.71 | 8400 | 1.4425 | 0.7305 | 0.7309 |
| 0.0456 | 168.63 | 8600 | 1.4659 | 0.7407 | 0.7407 |
| 0.0446 | 172.55 | 8800 | 1.3911 | 0.7395 | 0.7395 |
| 0.0402 | 176.47 | 9000 | 1.4602 | 0.7395 | 0.7395 |
| 0.0412 | 180.39 | 9200 | 1.4362 | 0.7408 | 0.7407 |
| 0.0387 | 184.31 | 9400 | 1.4318 | 0.7371 | 0.7370 |
| 0.0423 | 188.24 | 9600 | 1.4263 | 0.7432 | 0.7432 |
| 0.0407 | 192.16 | 9800 | 1.4117 | 0.7432 | 0.7432 |
| 0.0403 | 196.08 | 10000 | 1.4189 | 0.7432 | 0.7432 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_0-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_0-seqsight\_32768\_512\_43M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6723
* F1 Score: 0.7295
* Accuracy: 0.7296
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5862
- F1 Score: 0.6914
- Accuracy: 0.6936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6405 | 1.69 | 200 | 0.6056 | 0.6719 | 0.6729 |
| 0.5998 | 3.39 | 400 | 0.5973 | 0.6791 | 0.6835 |
| 0.5818 | 5.08 | 600 | 0.5752 | 0.6997 | 0.6999 |
| 0.5719 | 6.78 | 800 | 0.5671 | 0.7079 | 0.7079 |
| 0.5608 | 8.47 | 1000 | 0.5733 | 0.7042 | 0.7063 |
| 0.549 | 10.17 | 1200 | 0.5616 | 0.7134 | 0.7138 |
| 0.5429 | 11.86 | 1400 | 0.5570 | 0.7118 | 0.7122 |
| 0.5324 | 13.56 | 1600 | 0.5578 | 0.7197 | 0.7201 |
| 0.5292 | 15.25 | 1800 | 0.5583 | 0.7215 | 0.7217 |
| 0.5207 | 16.95 | 2000 | 0.5601 | 0.7164 | 0.7191 |
| 0.5177 | 18.64 | 2200 | 0.5518 | 0.7150 | 0.7164 |
| 0.5134 | 20.34 | 2400 | 0.5586 | 0.7227 | 0.7238 |
| 0.5099 | 22.03 | 2600 | 0.5743 | 0.7071 | 0.7106 |
| 0.5023 | 23.73 | 2800 | 0.5625 | 0.7118 | 0.7127 |
| 0.4974 | 25.42 | 3000 | 0.5680 | 0.7176 | 0.7180 |
| 0.4955 | 27.12 | 3200 | 0.5591 | 0.7176 | 0.7180 |
| 0.491 | 28.81 | 3400 | 0.5626 | 0.7133 | 0.7138 |
| 0.4841 | 30.51 | 3600 | 0.5655 | 0.7211 | 0.7212 |
| 0.4843 | 32.2 | 3800 | 0.5584 | 0.7158 | 0.7164 |
| 0.4809 | 33.9 | 4000 | 0.5682 | 0.7143 | 0.7148 |
| 0.4736 | 35.59 | 4200 | 0.5741 | 0.7139 | 0.7143 |
| 0.4675 | 37.29 | 4400 | 0.5745 | 0.7195 | 0.7196 |
| 0.4662 | 38.98 | 4600 | 0.5763 | 0.7148 | 0.7148 |
| 0.4611 | 40.68 | 4800 | 0.5971 | 0.7091 | 0.7111 |
| 0.4552 | 42.37 | 5000 | 0.5868 | 0.7163 | 0.7164 |
| 0.4575 | 44.07 | 5200 | 0.5880 | 0.7150 | 0.7153 |
| 0.4523 | 45.76 | 5400 | 0.5929 | 0.7136 | 0.7143 |
| 0.4448 | 47.46 | 5600 | 0.6072 | 0.7191 | 0.7191 |
| 0.4473 | 49.15 | 5800 | 0.5905 | 0.7169 | 0.7169 |
| 0.4368 | 50.85 | 6000 | 0.5992 | 0.7099 | 0.7106 |
| 0.4407 | 52.54 | 6200 | 0.6063 | 0.7170 | 0.7169 |
| 0.4375 | 54.24 | 6400 | 0.5985 | 0.7130 | 0.7138 |
| 0.4359 | 55.93 | 6600 | 0.6044 | 0.7117 | 0.7116 |
| 0.4266 | 57.63 | 6800 | 0.6082 | 0.7105 | 0.7106 |
| 0.4247 | 59.32 | 7000 | 0.6136 | 0.7142 | 0.7148 |
| 0.4311 | 61.02 | 7200 | 0.6047 | 0.7123 | 0.7127 |
| 0.4237 | 62.71 | 7400 | 0.6130 | 0.7117 | 0.7116 |
| 0.4221 | 64.41 | 7600 | 0.6120 | 0.7097 | 0.7100 |
| 0.4217 | 66.1 | 7800 | 0.6192 | 0.7104 | 0.7106 |
| 0.4162 | 67.8 | 8000 | 0.6190 | 0.7128 | 0.7127 |
| 0.4173 | 69.49 | 8200 | 0.6208 | 0.7078 | 0.7079 |
| 0.4155 | 71.19 | 8400 | 0.6187 | 0.7054 | 0.7053 |
| 0.4177 | 72.88 | 8600 | 0.6202 | 0.7032 | 0.7031 |
| 0.4113 | 74.58 | 8800 | 0.6260 | 0.7075 | 0.7074 |
| 0.4127 | 76.27 | 9000 | 0.6312 | 0.7054 | 0.7058 |
| 0.4145 | 77.97 | 9200 | 0.6249 | 0.7051 | 0.7053 |
| 0.41 | 79.66 | 9400 | 0.6259 | 0.7090 | 0.7090 |
| 0.4075 | 81.36 | 9600 | 0.6283 | 0.7063 | 0.7063 |
| 0.4097 | 83.05 | 9800 | 0.6286 | 0.7051 | 0.7053 |
| 0.409 | 84.75 | 10000 | 0.6285 | 0.7046 | 0.7047 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:22:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_4-seqsight\_32768\_512\_43M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5862
* F1 Score: 0.6914
* Accuracy: 0.6936
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ilyi/whisper-large-v2-lora | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:23:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
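
Expressed in code, this corresponds to a `BitsAndBytesConfig` along the following lines — a sketch in which the base model id is a placeholder, and the `llm_int8_*` entries above are defaults for the unused 8-bit path:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 4-bit settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder base model id; substitute the Llama-2 checkpoint actually used.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config
)
```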
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | waelChafei/llama2-latest-summarization | null | [
"peft",
"region:us"
] | null | 2024-05-03T14:24:13+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
8,
146,
13
] | [
"TAGS\n#peft #region-us \n## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16### Framework versions\n\n\n- PEFT 0.4.0"
] |
feature-extraction | sentence-transformers | This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
This model is designed to support various applications in natural language processing and understanding.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
# Use the full Hub repository id so the checkpoint resolves correctly.
llm_name = "fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
outputs = model(**tokens)  # token embeddings are in outputs.last_hidden_state
```
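
The forward pass above returns token-level hidden states rather than a single vector. Continuing from that snippet, one common way to obtain a sentence embedding is attention-masked mean pooling — an assumption here, since the card does not state the intended pooling strategy:

```python
# Continues from the snippet above (`tokens` and `outputs` already defined).
mask = tokens["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)
hidden = outputs.last_hidden_state                      # (1, seq_len, dim)
sentence_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```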
| {"language": ["en"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Events", "Meetups", "Networking", "Social", "Community"], "datasets": ["fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp", "allenai/c4"], "pipeline_tag": "feature-extraction"} | fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Events",
"Meetups",
"Networking",
"Social",
"Community",
"custom_code",
"en",
"dataset:fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:24:58+00:00 | [] | [
"en"
] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #mteb #Events #Meetups #Networking #Social #Community #custom_code #en #dataset-fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp #dataset-allenai/c4 #license-apache-2.0 #endpoints_compatible #region-us
| This model is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for the following use case:
This model is designed to support various applications in natural language processing and understanding.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
| [
"## How to Use\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #mteb #Events #Meetups #Networking #Social #Community #custom_code #en #dataset-fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp #dataset-allenai/c4 #license-apache-2.0 #endpoints_compatible #region-us \n",
"## How to Use\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] | [
100,
43
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #mteb #Events #Meetups #Networking #Social #Community #custom_code #en #dataset-fine-tuned/jina-embeddings-v2-base-en-03052024-0swb-webapp #dataset-allenai/c4 #license-apache-2.0 #endpoints_compatible #region-us \n## How to Use\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ibivibiv/llama3-8b-ultrafeedback-dpo-v2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:25:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
44,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | sklearn |
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|------------------------------|---------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('transformer', MultiSkillTransformer()), ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))] |
| verbose | False |
| transformer | MultiSkillTransformer() |
| clf | SVC(C=1, class_weight='balanced', kernel='linear') |
| clf__C | 1 |
| clf__break_ties | False |
| clf__cache_size | 200 |
| clf__class_weight | balanced |
| clf__coef0 | 0.0 |
| clf__decision_function_shape | ovr |
| clf__degree | 3 |
| clf__gamma | scale |
| clf__kernel | linear |
| clf__max_iter | -1 |
| clf__probability | False |
| clf__random_state | |
| clf__shrinking | True |
| clf__tol | 0.001 |
| clf__verbose | False |
</details>
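The table above corresponds to the pipeline construction sketched below. `MultiSkillTransformer` is the authors' custom feature extractor, so a stand-in class is used here purely to make the sketch self-contained; it is not the real implementation.

```python
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

class MultiSkillTransformer(BaseEstimator, TransformerMixin):
    """Stand-in for the authors' custom feature extractor; the real
    implementation is not part of scikit-learn or this card."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        raise NotImplementedError("replace with the real feature extraction")

pipe = Pipeline(steps=[
    ("transformer", MultiSkillTransformer()),
    ("clf", SVC(C=1, kernel="linear", class_weight="balanced")),
])
```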
### Model Plot
<!-- The interactive skops/sklearn HTML diagram (inline CSS and widget markup) is omitted here; it depicts the fitted pipeline below. -->

```
Pipeline(steps=[('transformer', MultiSkillTransformer()),
                ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))])
```
## Evaluation Results
[More Information Needed]
# How to Get Started with the Model
[More Information Needed]
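Absent an official snippet, a hedged sketch follows, using the repository id and pickle filename from this card's metadata. Note that unpickling needs the authors' `MultiSkillTransformer` class importable in your environment, and the expected input format (assumed here to be raw skill-span strings) is not documented in this card.

```python
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="nestauk/multiskill-classifier",
    filename="multiskill-classifier8lnyq0he.pkl",
)
# Unpickling requires the authors' MultiSkillTransformer class to be
# importable; otherwise joblib.load will fail.
pipe = joblib.load(path)
preds = pipe.predict(["communicate verbally and in writing"])  # illustrative input
print(preds)
```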
# Model Card Authors
This model card was written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# model_description
Support Vector Machine (SVM) trained to predict whether a skill span is a multiskill or not.
# Classification Report
<details>
<summary> Click to expand </summary>
| index | precision | recall | f1-score | support |
|--------------|-------------|----------|------------|------------|
| SKILL | 0.871795 | 0.871795 | 0.871795 | 78 |
| MULTISKILL | 0.89899 | 0.89899 | 0.89899 | 99 |
| accuracy     | 0.887006    | 0.887006 | 0.887006   | 177        |
| macro avg | 0.885392 | 0.885392 | 0.885392 | 177 |
| weighted avg | 0.887006 | 0.887006 | 0.887006 | 177 |
</details>
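The table above has the shape of scikit-learn's `classification_report` output converted to a DataFrame. For reference, a minimal sketch of producing such a report (with dummy stand-in labels, not the actual evaluation data):

```python
from sklearn.metrics import classification_report

# Dummy stand-ins for the held-out labels and the pipeline's predictions;
# the real evaluation used 177 labelled spans (see the support column above).
y_true = ["SKILL", "MULTISKILL", "MULTISKILL", "SKILL"]
y_pred = ["SKILL", "MULTISKILL", "SKILL", "SKILL"]

print(classification_report(y_true, y_pred))
```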
| {"license": "mit", "library_name": "sklearn", "tags": ["sklearn", "skops", "text-classification"], "model_format": "pickle", "model_file": "multiskill-classifier8lnyq0he.pkl"} | nestauk/multiskill-classifier | null | [
"sklearn",
"skops",
"text-classification",
"license:mit",
"region:us"
] | null | 2024-05-03T14:26:13+00:00 | [] | [] | TAGS
#sklearn #skops #text-classification #license-mit #region-us
| Model description
=================
Intended uses & limitations
---------------------------
Training Procedure
------------------
### Hyperparameters
Click to expand
### Model Plot
(The skops HTML model diagram and its stylesheet are omitted here; in this plain-text rendering they survived only as garbled markup. The diagram depicts the fitted pipeline:)

```
Pipeline(steps=[('transformer', MultiSkillTransformer()),
                ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))])
```
Evaluation Results
------------------
How to Get Started with the Model
=================================
Model Card Authors
==================
This model card was written by the following authors:
Model Card Contact
==================
You can contact the model card authors through the following channels:
Below you can find information related to citation.
BibTeX:
model\_description
==================
Support Vector Machine (SVM) trained to predict whether a skill span is a multiskill or not.
Classification Report
=====================
Click to expand
| [
"### Hyperparameters\n\n\n\n Click to expand",
"### Model Plot"
] | [
"TAGS\n#sklearn #skops #text-classification #license-mit #region-us \n",
"### Hyperparameters\n\n\n\n Click to expand",
"### Model Plot"
] | [
20,
10,
5
] | [
"TAGS\n#sklearn #skops #text-classification #license-mit #region-us \n### Hyperparameters\n\n\n\n Click to expand### Model Plot"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototipo_3_EMI
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2540
- Accuracy: 0.5423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
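For reference, these settings map onto a 🤗 `TrainingArguments` configuration roughly as sketched below; the output directory and the 200-step evaluation cadence (implied by the results table) are assumptions, and the Adam betas/epsilon listed above are the optimizer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Prototipo_3_EMI",   # assumption: not recorded in this card
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=5,
    evaluation_strategy="steps",    # the results table evaluates every 200 steps
    eval_steps=200,
    logging_steps=200,              # implied by the training-loss column
)
```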
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.2186 | 0.1778 | 200 | 1.1556 | 0.4803 |
| 1.1345 | 0.3556 | 400 | 1.0663 | 0.525 |
| 1.102 | 0.5333 | 600 | 1.0479 | 0.5293 |
| 1.1325 | 0.7111 | 800 | 1.0483 | 0.5353 |
| 1.1211 | 0.8889 | 1000 | 1.0337 | 0.521 |
| 0.9736 | 1.0667 | 1200 | 1.0006 | 0.5503 |
| 0.9428 | 1.2444 | 1400 | 1.0214 | 0.5523 |
| 0.9095 | 1.4222 | 1600 | 1.0174 | 0.555 |
| 0.9806 | 1.6 | 1800 | 1.0155 | 0.5527 |
| 0.969 | 1.7778 | 2000 | 1.0043 | 0.5547 |
| 0.9112 | 1.9556 | 2200 | 1.0050 | 0.5537 |
| 0.7557 | 2.1333 | 2400 | 1.0496 | 0.5607 |
| 0.8212 | 2.3111 | 2600 | 1.0494 | 0.5597 |
| 0.7695 | 2.4889 | 2800 | 1.0510 | 0.5687 |
| 0.7648 | 2.6667 | 3000 | 1.0513 | 0.5603 |
| 0.8232 | 2.8444 | 3200 | 1.0316 | 0.563 |
| 0.6288 | 3.0222 | 3400 | 1.0883 | 0.5503 |
| 0.6736 | 3.2 | 3600 | 1.1232 | 0.548 |
| 0.682 | 3.3778 | 3800 | 1.1695 | 0.543 |
| 0.6682 | 3.5556 | 4000 | 1.1608 | 0.5427 |
| 0.6516 | 3.7333 | 4200 | 1.1636 | 0.545 |
| 0.6731 | 3.9111 | 4400 | 1.1694 | 0.5403 |
| 0.5388 | 4.0889 | 4600 | 1.2120 | 0.544 |
| 0.5663 | 4.2667 | 4800 | 1.2278 | 0.544 |
| 0.5579 | 4.4444 | 5000 | 1.2439 | 0.538 |
| 0.5216 | 4.6222 | 5200 | 1.2507 | 0.5427 |
| 0.4634 | 4.8 | 5400 | 1.2531 | 0.5393 |
| 0.5359 | 4.9778 | 5600 | 1.2540 | 0.5423 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dccuchile/distilbert-base-spanish-uncased", "model-index": [{"name": "Prototipo_3_EMI", "results": []}]} | Armandodelca/Prototipo_3_EMI | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:28:13+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-dccuchile/distilbert-base-spanish-uncased #autotrain_compatible #endpoints_compatible #region-us
| Prototipo\_3\_EMI
=================
This model is a fine-tuned version of dccuchile/distilbert-base-spanish-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2540
* Accuracy: 0.5423
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 250
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 250\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-dccuchile/distilbert-base-spanish-uncased #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 250\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
55,
117,
5,
44
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-dccuchile/distilbert-base-spanish-uncased #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 250\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
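In the absence of an official snippet, a hedged sketch follows, assuming only the repository id from this card's metadata and that the tokenizer ships a chat template (suggested by the `conversational` tag).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golf2248/38v8mr7"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarise what a model card is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```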
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/38v8mr7 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:28:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
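The authors have not filled this section in yet; until they do, here is a minimal, untested sketch of loading the checkpoint with the standard `transformers` API. The repo id is taken from this card's metadata, and the plain-prompt usage is an assumption based on the model's `text-generation` tag.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOpeepeepoopoo/herewegoagain17"  # assumed from the card metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Plain-text prompt; the model is tagged "conversational", so a chat template
# may also be available via tokenizer.apply_chat_template (not verified here).
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```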
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain17 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:30:29+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers | # nbeerbower/slerp-bophades-truthy-math-mistral-7B AWQ
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [slerp-bophades-truthy-math-mistral-7B](https://huggingface.co/nbeerbower/slerp-bophades-truthy-math-mistral-7B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/slerp-bophades-truthy-math-mistral-7B-AWQ"
system_message = "You are slerp-bophades-truthy-math-mistral-7B, incarnated as a powerful AI. You were created by nbeerbower."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
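As a concrete illustration of the Transformers route above, a minimal sketch (assuming `transformers>=4.35.0` and `autoawq` are installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/slerp-bophades-truthy-math-mistral-7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# No AWQ-specific arguments are needed here: the quantization config ships
# inside the checkpoint and is detected automatically by transformers.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```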
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/slerp-bophades-truthy-math-mistral-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:30:34+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # nbeerbower/slerp-bophades-truthy-math-mistral-7B AWQ
- Model creator: nbeerbower
- Original model: slerp-bophades-truthy-math-mistral-7B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later, with support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# nbeerbower/slerp-bophades-truthy-math-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: slerp-bophades-truthy-math-mistral-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# nbeerbower/slerp-bophades-truthy-math-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: slerp-bophades-truthy-math-mistral-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
41,
56,
5,
7,
6,
172
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# nbeerbower/slerp-bophades-truthy-math-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: slerp-bophades-truthy-math-mistral-7B## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
text-generation | transformers | # flammenai/flammen15X-mistral-7B AWQ
- Model creator: [flammenai](https://huggingface.co/flammenai)
- Original model: [flammen15X-mistral-7B](https://huggingface.co/flammenai/flammen15X-mistral-7B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/flammen15X-mistral-7B-AWQ"
system_message = "You are flammen15X-mistral-7B, incarnated as a powerful AI. You were created by flammenai."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
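For the vLLM route listed above, a minimal offline-inference sketch (the `quantization="awq"` flag and the sampling values are illustrative, not tuned for this checkpoint):

```python
from vllm import LLM, SamplingParams

# vLLM loads the AWQ weights directly; quantization="awq" makes this explicit.
llm = LLM(model="solidrust/flammen15X-mistral-7B-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Write a haiku about quantization."], params)
print(outputs[0].outputs[0].text)
```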
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/flammen15X-mistral-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:31:52+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # flammenai/flammen15X-mistral-7B AWQ
- Model creator: flammenai
- Original model: flammen15X-mistral-7B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later, with support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# flammenai/flammen15X-mistral-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen15X-mistral-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# flammenai/flammen15X-mistral-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen15X-mistral-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
41,
42,
5,
7,
6,
172
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# flammenai/flammen15X-mistral-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen15X-mistral-7B## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7623
- F1 Score: 0.8115
- Accuracy: 0.8117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
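The training script itself is not included in this card, so as an illustration only, here is how the hyperparameters above would map onto `transformers.TrainingArguments` (the `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_mouse_3-seqsight_32768_512_43M-L8_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```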
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.555 | 13.33 | 200 | 0.4424 | 0.7699 | 0.7699 |
| 0.3629 | 26.67 | 400 | 0.3846 | 0.8284 | 0.8285 |
| 0.2668 | 40.0 | 600 | 0.4172 | 0.8159 | 0.8159 |
| 0.2103 | 53.33 | 800 | 0.4708 | 0.8282 | 0.8285 |
| 0.1661 | 66.67 | 1000 | 0.5590 | 0.8117 | 0.8117 |
| 0.133 | 80.0 | 1200 | 0.5868 | 0.8449 | 0.8452 |
| 0.1091 | 93.33 | 1400 | 0.6657 | 0.7988 | 0.7992 |
| 0.0976 | 106.67 | 1600 | 0.6974 | 0.8116 | 0.8117 |
| 0.0782 | 120.0 | 1800 | 0.7658 | 0.7866 | 0.7866 |
| 0.0674 | 133.33 | 2000 | 0.7918 | 0.7991 | 0.7992 |
| 0.058 | 146.67 | 2200 | 0.7780 | 0.8199 | 0.8201 |
| 0.0569 | 160.0 | 2400 | 0.8391 | 0.8117 | 0.8117 |
| 0.0455 | 173.33 | 2600 | 0.9611 | 0.8158 | 0.8159 |
| 0.0415 | 186.67 | 2800 | 0.9182 | 0.8158 | 0.8159 |
| 0.0414 | 200.0 | 3000 | 0.9038 | 0.8234 | 0.8243 |
| 0.0363 | 213.33 | 3200 | 0.9562 | 0.8200 | 0.8201 |
| 0.0379 | 226.67 | 3400 | 0.9500 | 0.8115 | 0.8117 |
| 0.0295 | 240.0 | 3600 | 0.9630 | 0.8074 | 0.8075 |
| 0.0272 | 253.33 | 3800 | 0.9748 | 0.8033 | 0.8033 |
| 0.0274 | 266.67 | 4000 | 0.9672 | 0.8159 | 0.8159 |
| 0.0255 | 280.0 | 4200 | 0.9223 | 0.8367 | 0.8368 |
| 0.0221 | 293.33 | 4400 | 1.0377 | 0.8158 | 0.8159 |
| 0.0219 | 306.67 | 4600 | 0.9940 | 0.8241 | 0.8243 |
| 0.0213 | 320.0 | 4800 | 0.9899 | 0.8242 | 0.8243 |
| 0.0227 | 333.33 | 5000 | 0.9454 | 0.8242 | 0.8243 |
| 0.018 | 346.67 | 5200 | 1.0548 | 0.8159 | 0.8159 |
| 0.0196 | 360.0 | 5400 | 1.0513 | 0.8157 | 0.8159 |
| 0.0177 | 373.33 | 5600 | 1.0282 | 0.8243 | 0.8243 |
| 0.0162 | 386.67 | 5800 | 1.1252 | 0.8283 | 0.8285 |
| 0.0135 | 400.0 | 6000 | 1.1668 | 0.8201 | 0.8201 |
| 0.0161 | 413.33 | 6200 | 1.1143 | 0.8283 | 0.8285 |
| 0.0148 | 426.67 | 6400 | 1.1679 | 0.8242 | 0.8243 |
| 0.0152 | 440.0 | 6600 | 1.1737 | 0.8239 | 0.8243 |
| 0.0137 | 453.33 | 6800 | 1.1314 | 0.8240 | 0.8243 |
| 0.0109 | 466.67 | 7000 | 1.1744 | 0.8200 | 0.8201 |
| 0.0143 | 480.0 | 7200 | 1.1200 | 0.8449 | 0.8452 |
| 0.0105 | 493.33 | 7400 | 1.1679 | 0.8284 | 0.8285 |
| 0.0118 | 506.67 | 7600 | 1.1535 | 0.8284 | 0.8285 |
| 0.011 | 520.0 | 7800 | 1.1421 | 0.8284 | 0.8285 |
| 0.0114 | 533.33 | 8000 | 1.1654 | 0.8242 | 0.8243 |
| 0.0104 | 546.67 | 8200 | 1.2144 | 0.8201 | 0.8201 |
| 0.0086 | 560.0 | 8400 | 1.2283 | 0.8243 | 0.8243 |
| 0.0096 | 573.33 | 8600 | 1.2220 | 0.8326 | 0.8326 |
| 0.0094 | 586.67 | 8800 | 1.1976 | 0.8243 | 0.8243 |
| 0.0102 | 600.0 | 9000 | 1.1849 | 0.8284 | 0.8285 |
| 0.0091 | 613.33 | 9200 | 1.1721 | 0.8201 | 0.8201 |
| 0.0085 | 626.67 | 9400 | 1.2208 | 0.8201 | 0.8201 |
| 0.0085 | 640.0 | 9600 | 1.2105 | 0.8243 | 0.8243 |
| 0.0092 | 653.33 | 9800 | 1.2049 | 0.8243 | 0.8243 |
| 0.0084 | 666.67 | 10000 | 1.2050 | 0.8201 | 0.8201 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:31:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_3-seqsight\_32768\_512\_43M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7623
* F1 Score: 0.8115
* Accuracy: 0.8117
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7451
- F1 Score: 0.8032
- Accuracy: 0.8033
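To run inference with this adapter, something like the following sketch should work. The task head is an assumption (the card reports F1 and accuracy, which suggests sequence classification), `num_labels=2` is assumed from the binary-looking metrics, and `trust_remote_code=True` may be required depending on the base architecture.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the PEFT adapter
```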
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6013 | 13.33 | 200 | 0.5247 | 0.7186 | 0.7197 |
| 0.4902 | 26.67 | 400 | 0.4775 | 0.7612 | 0.7615 |
| 0.4277 | 40.0 | 600 | 0.4410 | 0.7858 | 0.7866 |
| 0.3736 | 53.33 | 800 | 0.3964 | 0.8325 | 0.8326 |
| 0.3278 | 66.67 | 1000 | 0.3932 | 0.8283 | 0.8285 |
| 0.2964 | 80.0 | 1200 | 0.3917 | 0.8326 | 0.8326 |
| 0.2727 | 93.33 | 1400 | 0.3935 | 0.8325 | 0.8326 |
| 0.2528 | 106.67 | 1600 | 0.4000 | 0.8242 | 0.8243 |
| 0.2295 | 120.0 | 1800 | 0.4109 | 0.8325 | 0.8326 |
| 0.2162 | 133.33 | 2000 | 0.4344 | 0.8243 | 0.8243 |
| 0.2028 | 146.67 | 2200 | 0.4479 | 0.8243 | 0.8243 |
| 0.1902 | 160.0 | 2400 | 0.4792 | 0.8158 | 0.8159 |
| 0.1799 | 173.33 | 2600 | 0.5062 | 0.8113 | 0.8117 |
| 0.1687 | 186.67 | 2800 | 0.4971 | 0.8326 | 0.8326 |
| 0.1651 | 200.0 | 3000 | 0.5520 | 0.8152 | 0.8159 |
| 0.1582 | 213.33 | 3200 | 0.5134 | 0.8200 | 0.8201 |
| 0.1462 | 226.67 | 3400 | 0.5592 | 0.8325 | 0.8326 |
| 0.1394 | 240.0 | 3600 | 0.5829 | 0.8200 | 0.8201 |
| 0.1332 | 253.33 | 3800 | 0.5996 | 0.8070 | 0.8075 |
| 0.131 | 266.67 | 4000 | 0.5894 | 0.8200 | 0.8201 |
| 0.1216 | 280.0 | 4200 | 0.6010 | 0.8199 | 0.8201 |
| 0.1182 | 293.33 | 4400 | 0.6116 | 0.8199 | 0.8201 |
| 0.1167 | 306.67 | 4600 | 0.6240 | 0.8368 | 0.8368 |
| 0.1116 | 320.0 | 4800 | 0.6361 | 0.8199 | 0.8201 |
| 0.1171 | 333.33 | 5000 | 0.6405 | 0.8072 | 0.8075 |
| 0.105 | 346.67 | 5200 | 0.6458 | 0.8326 | 0.8326 |
| 0.1044 | 360.0 | 5400 | 0.6778 | 0.8072 | 0.8075 |
| 0.1013 | 373.33 | 5600 | 0.6605 | 0.8242 | 0.8243 |
| 0.0976 | 386.67 | 5800 | 0.6878 | 0.8242 | 0.8243 |
| 0.0928 | 400.0 | 6000 | 0.7017 | 0.8368 | 0.8368 |
| 0.0991 | 413.33 | 6200 | 0.6914 | 0.8199 | 0.8201 |
| 0.0961 | 426.67 | 6400 | 0.7004 | 0.8241 | 0.8243 |
| 0.0989 | 440.0 | 6600 | 0.6938 | 0.8197 | 0.8201 |
| 0.0901 | 453.33 | 6800 | 0.7306 | 0.8198 | 0.8201 |
| 0.0907 | 466.67 | 7000 | 0.7197 | 0.8157 | 0.8159 |
| 0.0872 | 480.0 | 7200 | 0.7188 | 0.8158 | 0.8159 |
| 0.0857 | 493.33 | 7400 | 0.7279 | 0.8198 | 0.8201 |
| 0.0829 | 506.67 | 7600 | 0.7321 | 0.8155 | 0.8159 |
| 0.0832 | 520.0 | 7800 | 0.7509 | 0.8158 | 0.8159 |
| 0.0836 | 533.33 | 8000 | 0.7534 | 0.8114 | 0.8117 |
| 0.0788 | 546.67 | 8200 | 0.7651 | 0.8033 | 0.8033 |
| 0.0816 | 560.0 | 8400 | 0.7707 | 0.8158 | 0.8159 |
| 0.0776 | 573.33 | 8600 | 0.7720 | 0.8199 | 0.8201 |
| 0.0792 | 586.67 | 8800 | 0.7674 | 0.8116 | 0.8117 |
| 0.0791 | 600.0 | 9000 | 0.7599 | 0.8115 | 0.8117 |
| 0.081 | 613.33 | 9200 | 0.7487 | 0.8115 | 0.8117 |
| 0.076 | 626.67 | 9400 | 0.7571 | 0.8115 | 0.8117 |
| 0.0739 | 640.0 | 9600 | 0.7649 | 0.8115 | 0.8117 |
| 0.0763 | 653.33 | 9800 | 0.7626 | 0.8200 | 0.8201 |
| 0.0776 | 666.67 | 10000 | 0.7622 | 0.8200 | 0.8201 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:31:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_3-seqsight\_32768\_512\_43M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7451
* F1 Score: 0.8032
* Accuracy: 0.8033
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers | This model is a fine-tuned version of the `FacebookAI/xlm-roberta-base` model for the **Bengali** and **Hindi** languages.
The dataset used is a Kaggle dataset: [Modified-hate-speech-bengali-hindi](https://www.kaggle.com/datasets/abirmondal/modified-hate-speech-bengali-hindi)
This model can classify Bengali and Hindi texts into the following 5 classes:
- defamation
- hate
- non-hate
- violence
- vulgar | {"language": ["bn", "hi"], "license": "apache-2.0"} | kingshukroy/xlm-roberta-base-hate-speech-ben-hin | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"bn",
"hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:31:56+00:00 | [] | [
"bn",
"hi"
] | TAGS
#transformers #safetensors #xlm-roberta #text-classification #bn #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This model is a fine-tuned version of the FacebookAI/xlm-roberta-base model for the Bengali and Hindi languages.
The dataset used is a Kaggle dataset: Modified-hate-speech-bengali-hindi
This model can classify Bengali and Hindi texts into the following 5 classes:
- defamation
- hate
- non-hate
- violence
- vulgar | [] | [
"TAGS\n#transformers #safetensors #xlm-roberta #text-classification #bn #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
42
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #text-classification #bn #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | # This repository is for the 2110446 Data Science and Data Engineering project.
# Project Structure
### DataGathering
This folder is mainly for collecting data from other sources.
#### 1. GoogleGeocoding
Gathers the geolocation of each affiliation name using the Google Geocoding API.
| Directory/File | Description |
| -------------- | --------------------------------------------------------------------- |
| geocode.csv | contains the geolocation of each affiliation (aff_id, aff_name, lat, lon) |
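A sketch of how such a file could be produced (the endpoint and response shape follow Google's public Geocoding docs; the API key, input list, and CSV layout are assumptions):

```python
import csv
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder
ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

affiliations = [(1, "Chulalongkorn University")]  # illustrative input

with open("geocode.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["aff_id", "aff_name", "lat", "lon"])
    for aff_id, name in affiliations:
        resp = requests.get(ENDPOINT, params={"address": name, "key": API_KEY})
        results = resp.json().get("results", [])
        if results:  # keep only affiliations the API could resolve
            loc = results[0]["geometry"]["location"]
            writer.writerow([aff_id, name, loc["lat"], loc["lng"]])
```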
#### 2. ScopusAPI
Queries Scopus data using the Scopus Search API.
| Directory/File | Description |
| --------------------- | --------------------------------------------------------- |
| example_api_data.json | contains example data fetched from the Scopus Search API |
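A sketch of the corresponding Scopus Search API call (endpoint and `X-ELS-APIKey` header per Elsevier's public docs; the query string and key are placeholders):

```python
import requests

API_KEY = "YOUR_SCOPUS_API_KEY"  # placeholder
resp = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
    params={"query": "AFFIL(chulalongkorn)", "count": 25},  # illustrative query
)
entries = resp.json()["search-results"]["entry"]  # per the Scopus response schema
```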
| {} | when-my-cat-learn-datasci/datasci-final-project-2024 | null | [
"region:us"
] | null | 2024-05-03T14:32:03+00:00 | [] | [] | TAGS
#region-us
| This repository is for the 2110446 Data Science and Data Engineering project.
=========================================================================
Project Structure
=================
### DataGathering
This folder is mainly for collecting data from other sources.
#### 1. GoogleGeocoding
Gathers the geolocation of each affiliation name using the Google Geocoding API.
#### 2. ScopusAPI
Queries Scopus data using the Scopus Search API.
| [
"### DataGathering\n\n\nThis folder is mainly for collecting data from other sources.",
"#### 1. GoogleGeocoding\n\n\nGather geolocation of affiliation name using google geocoding API.",
"#### 2. ScopusAPI\n\n\nQuery search scopus data using scopus API"
] | [
"TAGS\n#region-us \n",
"### DataGathering\n\n\nThis folder is mainly for collecting data from other sources.",
"#### 1. GoogleGeocoding\n\n\nGather geolocation of affiliation name using google geocoding API.",
"#### 2. ScopusAPI\n\n\nQuery search scopus data using scopus API"
] | [
5,
17,
24,
19
] | [
"TAGS\n#region-us \n### DataGathering\n\n\nThis folder is mainly for collecting data from other sources.#### 1. GoogleGeocoding\n\n\nGather geolocation of affiliation name using google geocoding API.#### 2. ScopusAPI\n\n\nQuery search scopus data using scopus API"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3732
- F1 Score: 0.8658
- Accuracy: 0.8659
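F1 score and accuracy here are the standard classification metrics; computed with scikit-learn they would look like the sketch below (the averaging mode is an assumption, since the card does not state it, and the label arrays are placeholders):

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]  # placeholder gold labels
y_pred = [0, 1, 0, 0, 1]  # placeholder model predictions

print(f1_score(y_true, y_pred, average="macro"))  # averaging mode assumed
print(accuracy_score(y_true, y_pred))
```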
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.454 | 9.52 | 200 | 0.3612 | 0.8322 | 0.8323 |
| 0.3199 | 19.05 | 400 | 0.3407 | 0.8383 | 0.8384 |
| 0.2886 | 28.57 | 600 | 0.3299 | 0.8414 | 0.8415 |
| 0.2693 | 38.1 | 800 | 0.3207 | 0.8476 | 0.8476 |
| 0.2506 | 47.62 | 1000 | 0.3171 | 0.8414 | 0.8415 |
| 0.2324 | 57.14 | 1200 | 0.3145 | 0.8627 | 0.8628 |
| 0.2145 | 66.67 | 1400 | 0.3219 | 0.8719 | 0.8720 |
| 0.203 | 76.19 | 1600 | 0.3365 | 0.8626 | 0.8628 |
| 0.1876 | 85.71 | 1800 | 0.3527 | 0.8624 | 0.8628 |
| 0.1755 | 95.24 | 2000 | 0.3300 | 0.8719 | 0.8720 |
| 0.1659 | 104.76 | 2200 | 0.3487 | 0.8750 | 0.875 |
| 0.1492 | 114.29 | 2400 | 0.3723 | 0.8749 | 0.875 |
| 0.1454 | 123.81 | 2600 | 0.3813 | 0.8780 | 0.8780 |
| 0.1383 | 133.33 | 2800 | 0.3897 | 0.8780 | 0.8780 |
| 0.128 | 142.86 | 3000 | 0.4353 | 0.8748 | 0.875 |
| 0.1228 | 152.38 | 3200 | 0.4500 | 0.8687 | 0.8689 |
| 0.1203 | 161.9 | 3400 | 0.4626 | 0.8716 | 0.8720 |
| 0.1174 | 171.43 | 3600 | 0.4549 | 0.8747 | 0.875 |
| 0.1129 | 180.95 | 3800 | 0.4300 | 0.8688 | 0.8689 |
| 0.1106 | 190.48 | 4000 | 0.4420 | 0.8780 | 0.8780 |
| 0.102 | 200.0 | 4200 | 0.4784 | 0.8656 | 0.8659 |
| 0.0992 | 209.52 | 4400 | 0.5022 | 0.8655 | 0.8659 |
| 0.0932 | 219.05 | 4600 | 0.4891 | 0.8688 | 0.8689 |
| 0.0941 | 228.57 | 4800 | 0.4837 | 0.8718 | 0.8720 |
| 0.0893 | 238.1 | 5000 | 0.5372 | 0.8625 | 0.8628 |
| 0.0894 | 247.62 | 5200 | 0.5028 | 0.8687 | 0.8689 |
| 0.0866 | 257.14 | 5400 | 0.5503 | 0.8686 | 0.8689 |
| 0.0843 | 266.67 | 5600 | 0.5312 | 0.8626 | 0.8628 |
| 0.0811 | 276.19 | 5800 | 0.5398 | 0.8656 | 0.8659 |
| 0.0804 | 285.71 | 6000 | 0.5454 | 0.8687 | 0.8689 |
| 0.0777 | 295.24 | 6200 | 0.5398 | 0.8656 | 0.8659 |
| 0.076 | 304.76 | 6400 | 0.5483 | 0.8656 | 0.8659 |
| 0.0761 | 314.29 | 6600 | 0.5600 | 0.8687 | 0.8689 |
| 0.0745 | 323.81 | 6800 | 0.5477 | 0.8718 | 0.8720 |
| 0.073 | 333.33 | 7000 | 0.5647 | 0.8656 | 0.8659 |
| 0.072 | 342.86 | 7200 | 0.5622 | 0.8656 | 0.8659 |
| 0.0699 | 352.38 | 7400 | 0.5793 | 0.8687 | 0.8689 |
| 0.0699 | 361.9 | 7600 | 0.5593 | 0.8718 | 0.8720 |
| 0.0718 | 371.43 | 7800 | 0.5890 | 0.8687 | 0.8689 |
| 0.0662 | 380.95 | 8000 | 0.5791 | 0.8687 | 0.8689 |
| 0.0688 | 390.48 | 8200 | 0.5699 | 0.8656 | 0.8659 |
| 0.0675 | 400.0 | 8400 | 0.5741 | 0.8687 | 0.8689 |
| 0.068 | 409.52 | 8600 | 0.5728 | 0.8687 | 0.8689 |
| 0.065 | 419.05 | 8800 | 0.5704 | 0.8687 | 0.8689 |
| 0.065 | 428.57 | 9000 | 0.5923 | 0.8687 | 0.8689 |
| 0.0645 | 438.1 | 9200 | 0.5827 | 0.8687 | 0.8689 |
| 0.0647 | 447.62 | 9400 | 0.5951 | 0.8687 | 0.8689 |
| 0.0614 | 457.14 | 9600 | 0.5868 | 0.8687 | 0.8689 |
| 0.066 | 466.67 | 9800 | 0.5821 | 0.8687 | 0.8689 |
| 0.0651 | 476.19 | 10000 | 0.5859 | 0.8687 | 0.8689 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:32:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_2-seqsight\_32768\_512\_43M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3732
* F1 Score: 0.8658
* Accuracy: 0.8659
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9727
- F1 Score: 0.8841
- Accuracy: 0.8841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3985 | 9.52 | 200 | 0.3315 | 0.8533 | 0.8537 |
| 0.2679 | 19.05 | 400 | 0.3289 | 0.8472 | 0.8476 |
| 0.2203 | 28.57 | 600 | 0.3222 | 0.8689 | 0.8689 |
| 0.1727 | 38.1 | 800 | 0.3561 | 0.8810 | 0.8811 |
| 0.1381 | 47.62 | 1000 | 0.4427 | 0.8871 | 0.8872 |
| 0.1136 | 57.14 | 1200 | 0.5343 | 0.8683 | 0.8689 |
| 0.0914 | 66.67 | 1400 | 0.5551 | 0.8841 | 0.8841 |
| 0.0812 | 76.19 | 1600 | 0.5796 | 0.8685 | 0.8689 |
| 0.0656 | 85.71 | 1800 | 0.5796 | 0.8811 | 0.8811 |
| 0.0589 | 95.24 | 2000 | 0.6605 | 0.8748 | 0.875 |
| 0.0542 | 104.76 | 2200 | 0.7135 | 0.8716 | 0.8720 |
| 0.0452 | 114.29 | 2400 | 0.6585 | 0.8841 | 0.8841 |
| 0.0402 | 123.81 | 2600 | 0.7272 | 0.8809 | 0.8811 |
| 0.0373 | 133.33 | 2800 | 0.7292 | 0.8748 | 0.875 |
| 0.0307 | 142.86 | 3000 | 0.6910 | 0.8749 | 0.875 |
| 0.0302 | 152.38 | 3200 | 0.7471 | 0.8810 | 0.8811 |
| 0.0279 | 161.9 | 3400 | 0.7656 | 0.8687 | 0.8689 |
| 0.0257 | 171.43 | 3600 | 0.7229 | 0.8810 | 0.8811 |
| 0.0231 | 180.95 | 3800 | 0.6869 | 0.8749 | 0.875 |
| 0.022 | 190.48 | 4000 | 0.7588 | 0.8872 | 0.8872 |
| 0.0197 | 200.0 | 4200 | 0.8807 | 0.8778 | 0.8780 |
| 0.0184 | 209.52 | 4400 | 0.8446 | 0.8748 | 0.875 |
| 0.0175 | 219.05 | 4600 | 0.7668 | 0.8780 | 0.8780 |
| 0.0184 | 228.57 | 4800 | 0.7068 | 0.8811 | 0.8811 |
| 0.0169 | 238.1 | 5000 | 0.7346 | 0.8841 | 0.8841 |
| 0.0175 | 247.62 | 5200 | 0.7376 | 0.8811 | 0.8811 |
| 0.0134 | 257.14 | 5400 | 0.7959 | 0.8810 | 0.8811 |
| 0.0116 | 266.67 | 5600 | 0.8400 | 0.8809 | 0.8811 |
| 0.0152 | 276.19 | 5800 | 0.8025 | 0.8780 | 0.8780 |
| 0.0142 | 285.71 | 6000 | 0.7747 | 0.8749 | 0.875 |
| 0.0119 | 295.24 | 6200 | 0.7905 | 0.8750 | 0.875 |
| 0.0118 | 304.76 | 6400 | 0.8220 | 0.8750 | 0.875 |
| 0.0111 | 314.29 | 6600 | 0.8242 | 0.8719 | 0.8720 |
| 0.011 | 323.81 | 6800 | 0.7764 | 0.8811 | 0.8811 |
| 0.0092 | 333.33 | 7000 | 0.8180 | 0.8719 | 0.8720 |
| 0.0114 | 342.86 | 7200 | 0.7806 | 0.8811 | 0.8811 |
| 0.01 | 352.38 | 7400 | 0.8355 | 0.8809 | 0.8811 |
| 0.0085 | 361.9 | 7600 | 0.8427 | 0.8840 | 0.8841 |
| 0.0095 | 371.43 | 7800 | 0.8382 | 0.8841 | 0.8841 |
| 0.0075 | 380.95 | 8000 | 0.8342 | 0.8871 | 0.8872 |
| 0.0072 | 390.48 | 8200 | 0.8775 | 0.8902 | 0.8902 |
| 0.0093 | 400.0 | 8400 | 0.8227 | 0.8841 | 0.8841 |
| 0.0075 | 409.52 | 8600 | 0.8249 | 0.8841 | 0.8841 |
| 0.0074 | 419.05 | 8800 | 0.8233 | 0.8811 | 0.8811 |
| 0.0082 | 428.57 | 9000 | 0.8354 | 0.8841 | 0.8841 |
| 0.0082 | 438.1 | 9200 | 0.8328 | 0.8811 | 0.8811 |
| 0.0064 | 447.62 | 9400 | 0.8586 | 0.8810 | 0.8811 |
| 0.006 | 457.14 | 9600 | 0.8593 | 0.8810 | 0.8811 |
| 0.0057 | 466.67 | 9800 | 0.8524 | 0.8811 | 0.8811 |
| 0.006 | 476.19 | 10000 | 0.8596 | 0.8810 | 0.8811 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:32:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_2-seqsight\_32768\_512\_43M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9727
* F1 Score: 0.8841
* Accuracy: 0.8841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3538
- F1 Score: 0.8368
- Accuracy: 0.8368
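For orientation, the sketch below shows how a PEFT adapter like this one is typically attached to its base model for inference. Only the two repository IDs are taken from this card; the sequence-classification head, the label count, and the `trust_remote_code` flag are assumptions.

```python
# Minimal loading sketch; a binary classification head is assumed, not confirmed.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the trained adapter
model.eval()
```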
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
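Expressed in code, the list above corresponds roughly to the following `TrainingArguments`; this is a sketch only, since the trainer wiring and data preprocessing are not documented here, and `output_dir` is a placeholder.

```python
# Every value below except output_dir comes from the hyperparameter list above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gue_mouse_3_l32",  # placeholder, not from the card
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```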
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4945 | 13.33 | 200 | 0.3604 | 0.8487 | 0.8494 |
| 0.25 | 26.67 | 400 | 0.4718 | 0.8410 | 0.8410 |
| 0.1496 | 40.0 | 600 | 0.5533 | 0.8448 | 0.8452 |
| 0.0947 | 53.33 | 800 | 0.6449 | 0.8200 | 0.8201 |
| 0.0622 | 66.67 | 1000 | 0.8307 | 0.8242 | 0.8243 |
| 0.0399 | 80.0 | 1200 | 0.9556 | 0.8033 | 0.8033 |
| 0.0309 | 93.33 | 1400 | 0.8286 | 0.8285 | 0.8285 |
| 0.0296 | 106.67 | 1600 | 1.0323 | 0.8282 | 0.8285 |
| 0.022 | 120.0 | 1800 | 0.9626 | 0.8282 | 0.8285 |
| 0.0206 | 133.33 | 2000 | 0.9417 | 0.8195 | 0.8201 |
| 0.019 | 146.67 | 2200 | 0.9371 | 0.8410 | 0.8410 |
| 0.0156 | 160.0 | 2400 | 0.9515 | 0.8408 | 0.8410 |
| 0.0147 | 173.33 | 2600 | 1.1014 | 0.8157 | 0.8159 |
| 0.0117 | 186.67 | 2800 | 0.9790 | 0.8326 | 0.8326 |
| 0.0142 | 200.0 | 3000 | 0.9529 | 0.8159 | 0.8159 |
| 0.0078 | 213.33 | 3200 | 1.1415 | 0.8325 | 0.8326 |
| 0.011 | 226.67 | 3400 | 1.0354 | 0.8200 | 0.8201 |
| 0.0079 | 240.0 | 3600 | 1.1383 | 0.8240 | 0.8243 |
| 0.0084 | 253.33 | 3800 | 0.9914 | 0.8408 | 0.8410 |
| 0.0074 | 266.67 | 4000 | 1.2147 | 0.8242 | 0.8243 |
| 0.0064 | 280.0 | 4200 | 1.1424 | 0.8409 | 0.8410 |
| 0.0067 | 293.33 | 4400 | 1.0934 | 0.8368 | 0.8368 |
| 0.0065 | 306.67 | 4600 | 1.0865 | 0.8326 | 0.8326 |
| 0.006 | 320.0 | 4800 | 1.3061 | 0.8282 | 0.8285 |
| 0.0071 | 333.33 | 5000 | 1.0516 | 0.8617 | 0.8619 |
| 0.0052 | 346.67 | 5200 | 1.0900 | 0.8410 | 0.8410 |
| 0.0049 | 360.0 | 5400 | 1.0540 | 0.8410 | 0.8410 |
| 0.0039 | 373.33 | 5600 | 1.0045 | 0.8367 | 0.8368 |
| 0.0039 | 386.67 | 5800 | 1.1885 | 0.8368 | 0.8368 |
| 0.0031 | 400.0 | 6000 | 1.3306 | 0.8282 | 0.8285 |
| 0.0044 | 413.33 | 6200 | 1.2414 | 0.8284 | 0.8285 |
| 0.0035 | 426.67 | 6400 | 1.1990 | 0.8325 | 0.8326 |
| 0.003 | 440.0 | 6600 | 1.2469 | 0.8449 | 0.8452 |
| 0.0034 | 453.33 | 6800 | 1.3184 | 0.8324 | 0.8326 |
| 0.0038 | 466.67 | 7000 | 1.3149 | 0.8368 | 0.8368 |
| 0.0023 | 480.0 | 7200 | 1.3285 | 0.8410 | 0.8410 |
| 0.0026 | 493.33 | 7400 | 1.3405 | 0.8368 | 0.8368 |
| 0.0023 | 506.67 | 7600 | 1.4812 | 0.8236 | 0.8243 |
| 0.0035 | 520.0 | 7800 | 1.2972 | 0.8326 | 0.8326 |
| 0.0024 | 533.33 | 8000 | 1.1751 | 0.8368 | 0.8368 |
| 0.0016 | 546.67 | 8200 | 1.2535 | 0.8368 | 0.8368 |
| 0.0018 | 560.0 | 8400 | 1.2629 | 0.8410 | 0.8410 |
| 0.0013 | 573.33 | 8600 | 1.2869 | 0.8451 | 0.8452 |
| 0.0012 | 586.67 | 8800 | 1.3690 | 0.8326 | 0.8326 |
| 0.0013 | 600.0 | 9000 | 1.4424 | 0.8410 | 0.8410 |
| 0.0012 | 613.33 | 9200 | 1.3960 | 0.8452 | 0.8452 |
| 0.0023 | 626.67 | 9400 | 1.3912 | 0.8325 | 0.8326 |
| 0.0011 | 640.0 | 9600 | 1.4004 | 0.8326 | 0.8326 |
| 0.0013 | 653.33 | 9800 | 1.3783 | 0.8409 | 0.8410 |
| 0.0011 | 666.67 | 10000 | 1.3855 | 0.8409 | 0.8410 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:32:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_3-seqsight\_32768\_512\_43M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3538
* F1 Score: 0.8368
* Accuracy: 0.8368
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9569
- F1 Score: 0.8841
- Accuracy: 0.8841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3637 | 9.52 | 200 | 0.3209 | 0.8597 | 0.8598 |
| 0.2171 | 19.05 | 400 | 0.3311 | 0.8657 | 0.8659 |
| 0.1481 | 28.57 | 600 | 0.4566 | 0.8528 | 0.8537 |
| 0.0949 | 38.1 | 800 | 0.5511 | 0.8686 | 0.8689 |
| 0.0633 | 47.62 | 1000 | 0.5569 | 0.8718 | 0.8720 |
| 0.0448 | 57.14 | 1200 | 0.6590 | 0.8561 | 0.8567 |
| 0.0366 | 66.67 | 1400 | 0.7107 | 0.8626 | 0.8628 |
| 0.0287 | 76.19 | 1600 | 0.7808 | 0.8709 | 0.8720 |
| 0.0218 | 85.71 | 1800 | 0.6591 | 0.8745 | 0.875 |
| 0.02 | 95.24 | 2000 | 0.6549 | 0.8626 | 0.8628 |
| 0.014 | 104.76 | 2200 | 0.6894 | 0.8564 | 0.8567 |
| 0.0139 | 114.29 | 2400 | 0.6611 | 0.8718 | 0.8720 |
| 0.01 | 123.81 | 2600 | 0.8001 | 0.8687 | 0.8689 |
| 0.0122 | 133.33 | 2800 | 0.6653 | 0.8688 | 0.8689 |
| 0.0086 | 142.86 | 3000 | 0.7451 | 0.8777 | 0.8780 |
| 0.0089 | 152.38 | 3200 | 0.7197 | 0.8716 | 0.8720 |
| 0.0084 | 161.9 | 3400 | 0.7224 | 0.8655 | 0.8659 |
| 0.0054 | 171.43 | 3600 | 0.7890 | 0.8716 | 0.8720 |
| 0.0081 | 180.95 | 3800 | 0.6031 | 0.8932 | 0.8933 |
| 0.0082 | 190.48 | 4000 | 0.7296 | 0.8747 | 0.875 |
| 0.0045 | 200.0 | 4200 | 0.8088 | 0.8713 | 0.8720 |
| 0.0062 | 209.52 | 4400 | 0.7307 | 0.8654 | 0.8659 |
| 0.0045 | 219.05 | 4600 | 0.7611 | 0.8682 | 0.8689 |
| 0.0059 | 228.57 | 4800 | 0.6720 | 0.9055 | 0.9055 |
| 0.005 | 238.1 | 5000 | 0.7237 | 0.8809 | 0.8811 |
| 0.0053 | 247.62 | 5200 | 0.7023 | 0.8779 | 0.8780 |
| 0.0057 | 257.14 | 5400 | 0.6694 | 0.8838 | 0.8841 |
| 0.0022 | 266.67 | 5600 | 0.7388 | 0.8749 | 0.875 |
| 0.0028 | 276.19 | 5800 | 0.7900 | 0.8685 | 0.8689 |
| 0.0053 | 285.71 | 6000 | 0.8860 | 0.8618 | 0.8628 |
| 0.0036 | 295.24 | 6200 | 0.8072 | 0.8684 | 0.8689 |
| 0.0027 | 304.76 | 6400 | 0.9184 | 0.8713 | 0.8720 |
| 0.0023 | 314.29 | 6600 | 0.7922 | 0.8810 | 0.8811 |
| 0.003 | 323.81 | 6800 | 0.7958 | 0.8809 | 0.8811 |
| 0.0022 | 333.33 | 7000 | 0.8633 | 0.8778 | 0.8780 |
| 0.0024 | 342.86 | 7200 | 0.8159 | 0.8901 | 0.8902 |
| 0.0021 | 352.38 | 7400 | 0.8138 | 0.8777 | 0.8780 |
| 0.0018 | 361.9 | 7600 | 0.7816 | 0.8902 | 0.8902 |
| 0.0013 | 371.43 | 7800 | 0.8696 | 0.8685 | 0.8689 |
| 0.0014 | 380.95 | 8000 | 1.0004 | 0.8684 | 0.8689 |
| 0.0017 | 390.48 | 8200 | 0.9455 | 0.8778 | 0.8780 |
| 0.0022 | 400.0 | 8400 | 0.9596 | 0.8683 | 0.8689 |
| 0.0017 | 409.52 | 8600 | 0.8629 | 0.8809 | 0.8811 |
| 0.0018 | 419.05 | 8800 | 0.8128 | 0.8810 | 0.8811 |
| 0.0011 | 428.57 | 9000 | 0.9574 | 0.8775 | 0.8780 |
| 0.0013 | 438.1 | 9200 | 0.8122 | 0.8871 | 0.8872 |
| 0.0012 | 447.62 | 9400 | 0.8705 | 0.8779 | 0.8780 |
| 0.0009 | 457.14 | 9600 | 0.8878 | 0.8778 | 0.8780 |
| 0.0011 | 466.67 | 9800 | 0.8747 | 0.8840 | 0.8841 |
| 0.0008 | 476.19 | 10000 | 0.8885 | 0.8778 | 0.8780 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:33:01+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_mouse\_2-seqsight\_32768\_512\_43M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9569
* F1 Score: 0.8841
* Accuracy: 0.8841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- F1 Score: 0.8803
- Accuracy: 0.8799
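The F1 score and accuracy reported above are standard classification metrics; a `compute_metrics` hook along the following lines would produce them, with the `macro` averaging mode for F1 being our assumption rather than a documented choice.

```python
# Evaluation-hook sketch; "macro" F1 averaging is assumed, not confirmed.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```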
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.955 | 0.7 | 200 | 0.8736 | 0.5246 | 0.5747 |
| 0.7473 | 1.4 | 400 | 0.5026 | 0.7909 | 0.7900 |
| 0.4937 | 2.1 | 600 | 0.4401 | 0.8176 | 0.8167 |
| 0.4529 | 2.8 | 800 | 0.4430 | 0.8182 | 0.8174 |
| 0.4353 | 3.5 | 1000 | 0.3943 | 0.8450 | 0.8446 |
| 0.4202 | 4.2 | 1200 | 0.4050 | 0.8374 | 0.8363 |
| 0.4033 | 4.9 | 1400 | 0.4056 | 0.8434 | 0.8428 |
| 0.3966 | 5.59 | 1600 | 0.3896 | 0.8496 | 0.8492 |
| 0.3876 | 6.29 | 1800 | 0.4151 | 0.8315 | 0.8303 |
| 0.3714 | 6.99 | 2000 | 0.3766 | 0.8560 | 0.8551 |
| 0.3707 | 7.69 | 2200 | 0.3852 | 0.8527 | 0.8518 |
| 0.3641 | 8.39 | 2400 | 0.3694 | 0.8636 | 0.8630 |
| 0.3618 | 9.09 | 2600 | 0.3902 | 0.8499 | 0.8488 |
| 0.3545 | 9.79 | 2800 | 0.3714 | 0.8630 | 0.8623 |
| 0.3512 | 10.49 | 3000 | 0.3696 | 0.8614 | 0.8608 |
| 0.3379 | 11.19 | 3200 | 0.3931 | 0.8535 | 0.8523 |
| 0.3396 | 11.89 | 3400 | 0.3698 | 0.8625 | 0.8617 |
| 0.3282 | 12.59 | 3600 | 0.3690 | 0.8606 | 0.8599 |
| 0.3322 | 13.29 | 3800 | 0.3559 | 0.8674 | 0.8665 |
| 0.3275 | 13.99 | 4000 | 0.3748 | 0.8580 | 0.8569 |
| 0.3202 | 14.69 | 4200 | 0.3845 | 0.8518 | 0.8509 |
| 0.3197 | 15.38 | 4400 | 0.3598 | 0.8666 | 0.8658 |
| 0.3156 | 16.08 | 4600 | 0.3774 | 0.8582 | 0.8573 |
| 0.314 | 16.78 | 4800 | 0.3483 | 0.8700 | 0.8694 |
| 0.303 | 17.48 | 5000 | 0.3798 | 0.8573 | 0.8564 |
| 0.3133 | 18.18 | 5200 | 0.3417 | 0.8740 | 0.8735 |
| 0.3057 | 18.88 | 5400 | 0.3506 | 0.8693 | 0.8687 |
| 0.2988 | 19.58 | 5600 | 0.3490 | 0.8725 | 0.8720 |
| 0.3043 | 20.28 | 5800 | 0.3407 | 0.8749 | 0.8744 |
| 0.2966 | 20.98 | 6000 | 0.3386 | 0.8758 | 0.8753 |
| 0.2959 | 21.68 | 6200 | 0.3516 | 0.8700 | 0.8694 |
| 0.289 | 22.38 | 6400 | 0.3414 | 0.8741 | 0.8735 |
| 0.2896 | 23.08 | 6600 | 0.3473 | 0.8738 | 0.8731 |
| 0.2931 | 23.78 | 6800 | 0.3315 | 0.8818 | 0.8814 |
| 0.287 | 24.48 | 7000 | 0.3565 | 0.8689 | 0.8683 |
| 0.287 | 25.17 | 7200 | 0.3469 | 0.8691 | 0.8683 |
| 0.2843 | 25.87 | 7400 | 0.3434 | 0.8730 | 0.8724 |
| 0.2799 | 26.57 | 7600 | 0.3439 | 0.8730 | 0.8724 |
| 0.2782 | 27.27 | 7800 | 0.3462 | 0.8758 | 0.8753 |
| 0.2818 | 27.97 | 8000 | 0.3415 | 0.8731 | 0.8724 |
| 0.2815 | 28.67 | 8200 | 0.3467 | 0.8711 | 0.8705 |
| 0.2753 | 29.37 | 8400 | 0.3368 | 0.8763 | 0.8757 |
| 0.2735 | 30.07 | 8600 | 0.3414 | 0.8754 | 0.8748 |
| 0.274 | 30.77 | 8800 | 0.3470 | 0.8723 | 0.8715 |
| 0.2724 | 31.47 | 9000 | 0.3402 | 0.8750 | 0.8744 |
| 0.2781 | 32.17 | 9200 | 0.3352 | 0.8772 | 0.8766 |
| 0.2713 | 32.87 | 9400 | 0.3393 | 0.8750 | 0.8744 |
| 0.2704 | 33.57 | 9600 | 0.3419 | 0.8742 | 0.8735 |
| 0.2708 | 34.27 | 9800 | 0.3381 | 0.8759 | 0.8753 |
| 0.2685 | 34.97 | 10000 | 0.3393 | 0.8748 | 0.8742 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:33:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_splice\_reconstructed-seqsight\_32768\_512\_43M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3190
* F1 Score: 0.8803
* Accuracy: 0.8799
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3814
- F1 Score: 0.8453
- Accuracy: 0.8446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9745 | 0.7 | 200 | 0.9285 | 0.4080 | 0.5638 |
| 0.9123 | 1.4 | 400 | 0.8733 | 0.5272 | 0.5868 |
| 0.848 | 2.1 | 600 | 0.7708 | 0.6538 | 0.6578 |
| 0.5987 | 2.8 | 800 | 0.5152 | 0.7857 | 0.7847 |
| 0.5122 | 3.5 | 1000 | 0.4898 | 0.7984 | 0.7977 |
| 0.4944 | 4.2 | 1200 | 0.4562 | 0.8124 | 0.8115 |
| 0.4756 | 4.9 | 1400 | 0.4551 | 0.8166 | 0.8161 |
| 0.467 | 5.59 | 1600 | 0.4449 | 0.8165 | 0.8159 |
| 0.4573 | 6.29 | 1800 | 0.4526 | 0.8168 | 0.8159 |
| 0.4418 | 6.99 | 2000 | 0.4425 | 0.8206 | 0.8196 |
| 0.4422 | 7.69 | 2200 | 0.4192 | 0.8268 | 0.8262 |
| 0.4368 | 8.39 | 2400 | 0.4300 | 0.8210 | 0.8203 |
| 0.4357 | 9.09 | 2600 | 0.4314 | 0.8268 | 0.8257 |
| 0.4291 | 9.79 | 2800 | 0.4175 | 0.8278 | 0.8270 |
| 0.4281 | 10.49 | 3000 | 0.4251 | 0.8273 | 0.8264 |
| 0.4181 | 11.19 | 3200 | 0.4334 | 0.8246 | 0.8235 |
| 0.4198 | 11.89 | 3400 | 0.4158 | 0.8314 | 0.8306 |
| 0.4128 | 12.59 | 3600 | 0.4143 | 0.8311 | 0.8303 |
| 0.4151 | 13.29 | 3800 | 0.4052 | 0.8317 | 0.8308 |
| 0.4138 | 13.99 | 4000 | 0.4280 | 0.8261 | 0.8251 |
| 0.4053 | 14.69 | 4200 | 0.4194 | 0.8305 | 0.8297 |
| 0.4049 | 15.38 | 4400 | 0.4147 | 0.8313 | 0.8303 |
| 0.4087 | 16.08 | 4600 | 0.4333 | 0.8257 | 0.8246 |
| 0.404 | 16.78 | 4800 | 0.4041 | 0.8351 | 0.8341 |
| 0.3943 | 17.48 | 5000 | 0.4226 | 0.8296 | 0.8286 |
| 0.4033 | 18.18 | 5200 | 0.3893 | 0.8444 | 0.8437 |
| 0.3994 | 18.88 | 5400 | 0.4044 | 0.8357 | 0.8347 |
| 0.3947 | 19.58 | 5600 | 0.3910 | 0.8417 | 0.8411 |
| 0.3993 | 20.28 | 5800 | 0.3890 | 0.8437 | 0.8431 |
| 0.3917 | 20.98 | 6000 | 0.3907 | 0.8411 | 0.8404 |
| 0.3877 | 21.68 | 6200 | 0.4085 | 0.8353 | 0.8345 |
| 0.3868 | 22.38 | 6400 | 0.3970 | 0.8391 | 0.8382 |
| 0.3869 | 23.08 | 6600 | 0.3944 | 0.8417 | 0.8409 |
| 0.3902 | 23.78 | 6800 | 0.3884 | 0.8432 | 0.8424 |
| 0.3869 | 24.48 | 7000 | 0.4038 | 0.8356 | 0.8347 |
| 0.39 | 25.17 | 7200 | 0.3901 | 0.8424 | 0.8415 |
| 0.3843 | 25.87 | 7400 | 0.3998 | 0.8350 | 0.8341 |
| 0.3805 | 26.57 | 7600 | 0.4001 | 0.8370 | 0.8360 |
| 0.384 | 27.27 | 7800 | 0.3986 | 0.8378 | 0.8369 |
| 0.3811 | 27.97 | 8000 | 0.3961 | 0.8400 | 0.8391 |
| 0.386 | 28.67 | 8200 | 0.4003 | 0.8390 | 0.8382 |
| 0.3772 | 29.37 | 8400 | 0.3913 | 0.8433 | 0.8424 |
| 0.3804 | 30.07 | 8600 | 0.3866 | 0.8464 | 0.8457 |
| 0.3798 | 30.77 | 8800 | 0.3976 | 0.8385 | 0.8376 |
| 0.3768 | 31.47 | 9000 | 0.3913 | 0.8413 | 0.8404 |
| 0.3848 | 32.17 | 9200 | 0.3911 | 0.8419 | 0.8411 |
| 0.3749 | 32.87 | 9400 | 0.3915 | 0.8427 | 0.8420 |
| 0.3758 | 33.57 | 9600 | 0.3960 | 0.8391 | 0.8382 |
| 0.3729 | 34.27 | 9800 | 0.3913 | 0.8430 | 0.8422 |
| 0.3765 | 34.97 | 10000 | 0.3934 | 0.8408 | 0.8400 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:33:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_splice\_reconstructed-seqsight\_32768\_512\_43M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3814
* F1 Score: 0.8453
* Accuracy: 0.8446
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
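The card leaves this section unfilled. For a Llama-architecture causal LM such as this one, a generic starting point is sketched below; the repository ID comes from this repo's metadata, while the prompt and generation settings are purely illustrative.

```python
# Generic loading sketch; nothing beyond the repo id is confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "golf2248/yhcah4v"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```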
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/yhcah4v | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:33:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# flammenai/flammen16-mistral-7B AWQ
- Model creator: [flammenai](https://huggingface.co/flammenai)
- Original model: [flammen16-mistral-7B](https://huggingface.co/flammenai/flammen16-mistral-7B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/flammen16-mistral-7B-AWQ"
system_message = "You are flammen16-mistral-7B, incarnated as a powerful AI. You were created by flammenai."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/flammen16-mistral-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:33:38+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # flammenai/flammen16-mistral-7B AWQ
- Model creator: flammenai
- Original model: flammen16-mistral-7B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later, with support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# flammenai/flammen16-mistral-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen16-mistral-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# flammenai/flammen16-mistral-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen16-mistral-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
41,
40,
5,
7,
6,
172
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# flammenai/flammen16-mistral-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen16-mistral-7B## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
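This section is likewise unfilled. Since the repo tags identify an XLM-RoBERTa encoder used for feature extraction, one plausible way to obtain sentence embeddings is sketched below; the mean-pooling step is an assumption, not something this card specifies.

```python
# Feature-extraction sketch; mean pooling over tokens is an assumed choice.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "lyghter/2ch-wt-24-01-01-27480-3849-mel-512-pool-256e7-1x008-1-1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

with torch.no_grad():
    enc = tokenizer("example text", return_tensors="pt")
    embedding = model(**enc).last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
```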
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | lyghter/2ch-wt-24-01-01-27480-3849-mel-512-pool-256e7-1x008-1-1 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:34:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
35,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loha_fine_tuned_boolq
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5655
- Accuracy: 0.7778
- F1: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
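As referenced above, here is a minimal sketch of this setup with 🤗 Transformers and PEFT. The learning rate, batch sizes, seed, scheduler, and step count come from the list; the LoHa rank, alpha, and target modules are assumptions, since the card does not record them.

```python
# Minimal sketch, assuming a LoHa adapter on BERT for binary (BoolQ-style) classification.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoHaConfig, get_peft_model

base = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Assumed adapter settings; the card states only the optimizer schedule.
loha = LoHaConfig(task_type="SEQ_CLS", r=8, alpha=16, target_modules=["query", "value"])
model = get_peft_model(model, loha)

args = TrainingArguments(
    output_dir="loha_fine_tuned_boolq",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=400,  # "training_steps: 400" above
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```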
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.6686 | 4.1667 | 50 | 0.6058 | 0.7778 | 0.6806 |
| 0.661 | 8.3333 | 100 | 0.5835 | 0.7778 | 0.6806 |
| 0.66 | 12.5 | 150 | 0.5765 | 0.7778 | 0.6806 |
| 0.6685 | 16.6667 | 200 | 0.5708 | 0.7778 | 0.6806 |
| 0.6634 | 20.8333 | 250 | 0.5677 | 0.7778 | 0.6806 |
| 0.6573 | 25.0 | 300 | 0.5668 | 0.7778 | 0.6806 |
| 0.6623 | 29.1667 | 350 | 0.5661 | 0.7778 | 0.6806 |
| 0.6583 | 33.3333 | 400 | 0.5655 | 0.7778 | 0.6806 |
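For reference, the Accuracy and F1 columns above are standard classification metrics; a hedged helper along these lines (the 🤗 Evaluate library is an assumption, as the card does not say how they were computed) would produce them from Trainer predictions.

```python
# Minimal sketch of a compute_metrics callback matching the table's columns.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels)["f1"],
    }
```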
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "loha_fine_tuned_boolq", "results": []}]} | anzeo/loha_fine_tuned_boolq | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T14:36:13+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us
| loha\_fine\_tuned\_boolq
========================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5655
* Accuracy: 0.7778
* F1: 0.6806
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.1.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.1.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.1.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
44,
99,
5,
55
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400### Training results### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.1.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
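The card leaves this blank. As a placeholder, the repository name suggests PEFT adapters trained on top of meta-llama/Llama-2-7b-hf; both the base-model choice and the adapter assumption below are inferred from the repo id, not stated by the card.

```python
# Minimal sketch, assuming these are PEFT adapters for Llama-2-7b.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_2_5e-05"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapters

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```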
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_2_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:37:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
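Since the card leaves this blank, here is a generic, hedged sketch of text generation with a Llama-architecture checkpoint; only the repository id comes from the card metadata.

```python
# Minimal sketch of standard causal-LM generation with 🤗 Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cilantro9246/hhdxmho"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```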
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/hhdxmho | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:37:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
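The card leaves this blank; a hedged sketch of generic Mask2Former inference follows. The repository id comes from the card metadata, instance segmentation is inferred from the repo name, and the test image URL is arbitrary.

```python
# Minimal sketch, assuming an instance-segmentation Mask2Former checkpoint.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

repo_id = "grrvk/palm-instance"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = Mask2FormerForUniversalSegmentation.from_pretrained(repo_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any test image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(result["segmentation"].shape)  # per-pixel instance ids
```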
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | grrvk/palm-instance | null | [
"transformers",
"safetensors",
"mask2former",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:39:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mask2former #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mask2former #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
31,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #mask2former #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
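As noted above, a hedged sketch expressing these hyperparameters as 🤗 TrainingArguments; num_labels=2 is an assumption based on the binary IMDB sentiment task named in the repo id.

```python
# Minimal sketch mirroring the listed hyperparameters.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          TrainingArguments)

base = "EleutherAI/pythia-70m"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

tokenizer.pad_token = tokenizer.eos_token            # GPT-NeoX has no pad token by default
model.config.pad_token_id = tokenizer.pad_token_id

args = TrainingArguments(
    output_dir="robust_llm_pythia-70m_imdb",  # illustrative output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=4,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```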
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:39:23+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4
This model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
70,
63,
7,
9,
9,
4,
93,
5,
40
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2944
- F1 Score: 0.8969
- Accuracy: 0.8965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of the same schedule follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
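As referenced above, a hedged sketch of the same schedule for a PEFT run. Only the values shown in the list come from the card; the LoRA settings are assumptions, since the card does not record its adapter configuration.

```python
# Minimal sketch of the card's optimizer schedule as TrainingArguments plus an assumed adapter.
from transformers import TrainingArguments
from peft import LoraConfig

peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16)  # assumed values

args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```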
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9188 | 0.7 | 200 | 0.7935 | 0.5622 | 0.6243 |
| 0.6099 | 1.4 | 400 | 0.4862 | 0.7908 | 0.7898 |
| 0.4544 | 2.1 | 600 | 0.4126 | 0.8366 | 0.8356 |
| 0.4102 | 2.8 | 800 | 0.4036 | 0.8410 | 0.8402 |
| 0.3846 | 3.5 | 1000 | 0.3809 | 0.8568 | 0.8558 |
| 0.3753 | 4.2 | 1200 | 0.3684 | 0.8555 | 0.8547 |
| 0.3527 | 4.9 | 1400 | 0.3712 | 0.8595 | 0.8588 |
| 0.3435 | 5.59 | 1600 | 0.3718 | 0.8604 | 0.8606 |
| 0.3326 | 6.29 | 1800 | 0.3858 | 0.8508 | 0.8498 |
| 0.315 | 6.99 | 2000 | 0.3399 | 0.8746 | 0.8742 |
| 0.3118 | 7.69 | 2200 | 0.3469 | 0.8706 | 0.8700 |
| 0.3028 | 8.39 | 2400 | 0.3435 | 0.8760 | 0.8757 |
| 0.2986 | 9.09 | 2600 | 0.3606 | 0.8704 | 0.8696 |
| 0.2917 | 9.79 | 2800 | 0.3505 | 0.8754 | 0.8748 |
| 0.2865 | 10.49 | 3000 | 0.3394 | 0.8789 | 0.8783 |
| 0.276 | 11.19 | 3200 | 0.3682 | 0.8661 | 0.8652 |
| 0.2786 | 11.89 | 3400 | 0.3400 | 0.8816 | 0.8810 |
| 0.2643 | 12.59 | 3600 | 0.3552 | 0.8745 | 0.8740 |
| 0.2619 | 13.29 | 3800 | 0.3251 | 0.8830 | 0.8825 |
| 0.2603 | 13.99 | 4000 | 0.3561 | 0.8706 | 0.8696 |
| 0.2511 | 14.69 | 4200 | 0.3451 | 0.8765 | 0.8757 |
| 0.2533 | 15.38 | 4400 | 0.3412 | 0.8823 | 0.8819 |
| 0.2457 | 16.08 | 4600 | 0.3587 | 0.8733 | 0.8726 |
| 0.2401 | 16.78 | 4800 | 0.3349 | 0.8826 | 0.8821 |
| 0.2311 | 17.48 | 5000 | 0.3627 | 0.8715 | 0.8707 |
| 0.2391 | 18.18 | 5200 | 0.3327 | 0.8834 | 0.8829 |
| 0.2315 | 18.88 | 5400 | 0.3346 | 0.8841 | 0.8836 |
| 0.2267 | 19.58 | 5600 | 0.3367 | 0.8862 | 0.8858 |
| 0.2274 | 20.28 | 5800 | 0.3212 | 0.8918 | 0.8915 |
| 0.2223 | 20.98 | 6000 | 0.3144 | 0.8921 | 0.8917 |
| 0.2185 | 21.68 | 6200 | 0.3224 | 0.8919 | 0.8915 |
| 0.2143 | 22.38 | 6400 | 0.3305 | 0.8870 | 0.8865 |
| 0.2132 | 23.08 | 6600 | 0.3316 | 0.8871 | 0.8867 |
| 0.2133 | 23.78 | 6800 | 0.3171 | 0.8943 | 0.8939 |
| 0.206 | 24.48 | 7000 | 0.3350 | 0.8855 | 0.8851 |
| 0.2058 | 25.17 | 7200 | 0.3413 | 0.8843 | 0.8838 |
| 0.2045 | 25.87 | 7400 | 0.3234 | 0.8893 | 0.8889 |
| 0.2025 | 26.57 | 7600 | 0.3352 | 0.8870 | 0.8865 |
| 0.1978 | 27.27 | 7800 | 0.3420 | 0.8867 | 0.8862 |
| 0.1981 | 27.97 | 8000 | 0.3323 | 0.8902 | 0.8897 |
| 0.1971 | 28.67 | 8200 | 0.3340 | 0.8891 | 0.8886 |
| 0.1926 | 29.37 | 8400 | 0.3222 | 0.8899 | 0.8895 |
| 0.1915 | 30.07 | 8600 | 0.3282 | 0.8925 | 0.8922 |
| 0.1907 | 30.77 | 8800 | 0.3352 | 0.8909 | 0.8904 |
| 0.1877 | 31.47 | 9000 | 0.3349 | 0.8905 | 0.8900 |
| 0.1916 | 32.17 | 9200 | 0.3329 | 0.8907 | 0.8902 |
| 0.1826 | 32.87 | 9400 | 0.3334 | 0.8917 | 0.8913 |
| 0.1879 | 33.57 | 9600 | 0.3380 | 0.8894 | 0.8889 |
| 0.1849 | 34.27 | 9800 | 0.3340 | 0.8898 | 0.8893 |
| 0.184 | 34.97 | 10000 | 0.3356 | 0.8898 | 0.8893 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:40:59+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_splice\_reconstructed-seqsight\_32768\_512\_43M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2944
* F1 Score: 0.8969
* Accuracy: 0.8965
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3686
- F1 Score: 0.8378
- Accuracy: 0.838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
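For reference, these settings map directly onto 🤗 `TrainingArguments`; a sketch (the `output_dir` is a placeholder, and the Adam betas/epsilon shown in the card are the optimizer defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                 # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # "training_steps" above
)
```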
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5602 | 0.79 | 200 | 0.4708 | 0.7707 | 0.771 |
| 0.4815 | 1.58 | 400 | 0.4658 | 0.7708 | 0.771 |
| 0.4804 | 2.37 | 600 | 0.4645 | 0.7702 | 0.771 |
| 0.4702 | 3.16 | 800 | 0.4669 | 0.7771 | 0.777 |
| 0.4657 | 3.95 | 1000 | 0.4622 | 0.7780 | 0.778 |
| 0.4641 | 4.74 | 1200 | 0.4698 | 0.7720 | 0.772 |
| 0.4623 | 5.53 | 1400 | 0.4627 | 0.7731 | 0.773 |
| 0.4552 | 6.32 | 1600 | 0.4624 | 0.7750 | 0.775 |
| 0.4605 | 7.11 | 1800 | 0.4681 | 0.7789 | 0.779 |
| 0.4595 | 7.91 | 2000 | 0.4625 | 0.7730 | 0.773 |
| 0.4542 | 8.7 | 2200 | 0.4602 | 0.7780 | 0.778 |
| 0.4525 | 9.49 | 2400 | 0.4544 | 0.7756 | 0.776 |
| 0.4524 | 10.28 | 2600 | 0.4749 | 0.7754 | 0.776 |
| 0.4505 | 11.07 | 2800 | 0.4669 | 0.7748 | 0.775 |
| 0.4501 | 11.86 | 3000 | 0.4698 | 0.7709 | 0.771 |
| 0.4502 | 12.65 | 3200 | 0.4697 | 0.7795 | 0.78 |
| 0.4485 | 13.44 | 3400 | 0.4513 | 0.7860 | 0.786 |
| 0.4476 | 14.23 | 3600 | 0.4519 | 0.7831 | 0.783 |
| 0.4461 | 15.02 | 3800 | 0.4619 | 0.7799 | 0.78 |
| 0.4474 | 15.81 | 4000 | 0.4591 | 0.7799 | 0.78 |
| 0.4425 | 16.6 | 4200 | 0.4516 | 0.7811 | 0.781 |
| 0.4458 | 17.39 | 4400 | 0.4671 | 0.7817 | 0.782 |
| 0.4429 | 18.18 | 4600 | 0.4565 | 0.7809 | 0.781 |
| 0.4412 | 18.97 | 4800 | 0.4664 | 0.7764 | 0.777 |
| 0.4449 | 19.76 | 5000 | 0.4518 | 0.7810 | 0.781 |
| 0.4425 | 20.55 | 5200 | 0.4483 | 0.7821 | 0.782 |
| 0.4386 | 21.34 | 5400 | 0.4535 | 0.7830 | 0.783 |
| 0.4392 | 22.13 | 5600 | 0.4586 | 0.7799 | 0.78 |
| 0.4422 | 22.92 | 5800 | 0.4534 | 0.7820 | 0.782 |
| 0.4357 | 23.72 | 6000 | 0.4632 | 0.7798 | 0.78 |
| 0.4382 | 24.51 | 6200 | 0.4507 | 0.7840 | 0.784 |
| 0.4415 | 25.3 | 6400 | 0.4554 | 0.7840 | 0.784 |
| 0.4336 | 26.09 | 6600 | 0.4530 | 0.7850 | 0.785 |
| 0.4353 | 26.88 | 6800 | 0.4604 | 0.7839 | 0.784 |
| 0.4368 | 27.67 | 7000 | 0.4507 | 0.7801 | 0.78 |
| 0.4372 | 28.46 | 7200 | 0.4485 | 0.7801 | 0.78 |
| 0.4377 | 29.25 | 7400 | 0.4544 | 0.7850 | 0.785 |
| 0.4395 | 30.04 | 7600 | 0.4516 | 0.7801 | 0.78 |
| 0.4366 | 30.83 | 7800 | 0.4540 | 0.7880 | 0.788 |
| 0.4378 | 31.62 | 8000 | 0.4494 | 0.7790 | 0.779 |
| 0.4342 | 32.41 | 8200 | 0.4553 | 0.7860 | 0.786 |
| 0.4315 | 33.2 | 8400 | 0.4560 | 0.7850 | 0.785 |
| 0.437 | 33.99 | 8600 | 0.4525 | 0.7811 | 0.781 |
| 0.4358 | 34.78 | 8800 | 0.4510 | 0.7811 | 0.781 |
| 0.4303 | 35.57 | 9000 | 0.4537 | 0.7810 | 0.781 |
| 0.4335 | 36.36 | 9200 | 0.4584 | 0.7850 | 0.785 |
| 0.4381 | 37.15 | 9400 | 0.4580 | 0.7850 | 0.785 |
| 0.4336 | 37.94 | 9600 | 0.4540 | 0.7840 | 0.784 |
| 0.4343 | 38.74 | 9800 | 0.4554 | 0.7880 | 0.788 |
| 0.4349 | 39.53 | 10000 | 0.4550 | 0.7880 | 0.788 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:41:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_0-seqsight\_32768\_512\_43M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3686
* F1 Score: 0.8378
* Accuracy: 0.838
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3670
- F1 Score: 0.8379
- Accuracy: 0.838
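The F1 Score and Accuracy reported throughout these cards are standard classification metrics; a `compute_metrics` function of roughly this shape (an illustrative sketch, not the authors' code — the F1 averaging mode is an assumption) would reproduce them in a `Trainer` evaluation loop:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class per example
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),  # averaging choice is a guess
    }
```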
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5365 | 0.79 | 200 | 0.4690 | 0.7631 | 0.764 |
| 0.4729 | 1.58 | 400 | 0.4592 | 0.7785 | 0.779 |
| 0.4706 | 2.37 | 600 | 0.4590 | 0.7719 | 0.773 |
| 0.4597 | 3.16 | 800 | 0.4564 | 0.7850 | 0.785 |
| 0.456 | 3.95 | 1000 | 0.4520 | 0.7830 | 0.783 |
| 0.4526 | 4.74 | 1200 | 0.4576 | 0.7830 | 0.783 |
| 0.4497 | 5.53 | 1400 | 0.4657 | 0.7714 | 0.772 |
| 0.4429 | 6.32 | 1600 | 0.4557 | 0.7727 | 0.773 |
| 0.4464 | 7.11 | 1800 | 0.4691 | 0.7730 | 0.774 |
| 0.4436 | 7.91 | 2000 | 0.4641 | 0.7715 | 0.772 |
| 0.4381 | 8.7 | 2200 | 0.4490 | 0.7890 | 0.789 |
| 0.4357 | 9.49 | 2400 | 0.4437 | 0.7839 | 0.784 |
| 0.4356 | 10.28 | 2600 | 0.4569 | 0.7848 | 0.785 |
| 0.4325 | 11.07 | 2800 | 0.4617 | 0.7724 | 0.773 |
| 0.4314 | 11.86 | 3000 | 0.4716 | 0.7847 | 0.785 |
| 0.4299 | 12.65 | 3200 | 0.4579 | 0.7827 | 0.783 |
| 0.4259 | 13.44 | 3400 | 0.4478 | 0.7910 | 0.791 |
| 0.4262 | 14.23 | 3600 | 0.4493 | 0.7979 | 0.798 |
| 0.4243 | 15.02 | 3800 | 0.4594 | 0.7877 | 0.788 |
| 0.4245 | 15.81 | 4000 | 0.4492 | 0.7838 | 0.784 |
| 0.418 | 16.6 | 4200 | 0.4491 | 0.7831 | 0.783 |
| 0.4219 | 17.39 | 4400 | 0.4660 | 0.7825 | 0.783 |
| 0.4193 | 18.18 | 4600 | 0.4481 | 0.7920 | 0.792 |
| 0.4168 | 18.97 | 4800 | 0.4636 | 0.7813 | 0.782 |
| 0.4186 | 19.76 | 5000 | 0.4498 | 0.7909 | 0.791 |
| 0.4187 | 20.55 | 5200 | 0.4455 | 0.7970 | 0.797 |
| 0.4117 | 21.34 | 5400 | 0.4562 | 0.7854 | 0.786 |
| 0.4139 | 22.13 | 5600 | 0.4638 | 0.7745 | 0.775 |
| 0.4143 | 22.92 | 5800 | 0.4511 | 0.7859 | 0.786 |
| 0.4071 | 23.72 | 6000 | 0.4612 | 0.7837 | 0.784 |
| 0.4098 | 24.51 | 6200 | 0.4475 | 0.7909 | 0.791 |
| 0.4147 | 25.3 | 6400 | 0.4531 | 0.7858 | 0.786 |
| 0.4038 | 26.09 | 6600 | 0.4504 | 0.7961 | 0.796 |
| 0.4061 | 26.88 | 6800 | 0.4573 | 0.7867 | 0.787 |
| 0.4082 | 27.67 | 7000 | 0.4481 | 0.7960 | 0.796 |
| 0.4072 | 28.46 | 7200 | 0.4474 | 0.7971 | 0.797 |
| 0.4081 | 29.25 | 7400 | 0.4574 | 0.7898 | 0.79 |
| 0.4089 | 30.04 | 7600 | 0.4573 | 0.7897 | 0.79 |
| 0.4056 | 30.83 | 7800 | 0.4507 | 0.7920 | 0.792 |
| 0.4047 | 31.62 | 8000 | 0.4455 | 0.7931 | 0.793 |
| 0.4034 | 32.41 | 8200 | 0.4576 | 0.7938 | 0.794 |
| 0.4008 | 33.2 | 8400 | 0.4555 | 0.7958 | 0.796 |
| 0.4052 | 33.99 | 8600 | 0.4521 | 0.7939 | 0.794 |
| 0.4031 | 34.78 | 8800 | 0.4488 | 0.7970 | 0.797 |
| 0.398 | 35.57 | 9000 | 0.4528 | 0.7950 | 0.795 |
| 0.4018 | 36.36 | 9200 | 0.4607 | 0.7835 | 0.784 |
| 0.4043 | 37.15 | 9400 | 0.4594 | 0.7877 | 0.788 |
| 0.401 | 37.94 | 9600 | 0.4531 | 0.7969 | 0.797 |
| 0.403 | 38.74 | 9800 | 0.4550 | 0.7928 | 0.793 |
| 0.4019 | 39.53 | 10000 | 0.4545 | 0.7938 | 0.794 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:41:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_0-seqsight\_32768\_512\_43M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3670
* F1 Score: 0.8379
* Accuracy: 0.838
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3652
- F1 Score: 0.8384
- Accuracy: 0.839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5168 | 0.79 | 200 | 0.4704 | 0.7720 | 0.772 |
| 0.4668 | 1.58 | 400 | 0.4545 | 0.7810 | 0.781 |
| 0.4609 | 2.37 | 600 | 0.4539 | 0.7782 | 0.779 |
| 0.4497 | 3.16 | 800 | 0.4563 | 0.7888 | 0.789 |
| 0.4438 | 3.95 | 1000 | 0.4478 | 0.7820 | 0.782 |
| 0.439 | 4.74 | 1200 | 0.4575 | 0.7790 | 0.779 |
| 0.4338 | 5.53 | 1400 | 0.4695 | 0.7697 | 0.771 |
| 0.4262 | 6.32 | 1600 | 0.4528 | 0.7849 | 0.785 |
| 0.4298 | 7.11 | 1800 | 0.4694 | 0.7839 | 0.785 |
| 0.4251 | 7.91 | 2000 | 0.4612 | 0.7826 | 0.783 |
| 0.4186 | 8.7 | 2200 | 0.4532 | 0.7910 | 0.791 |
| 0.4135 | 9.49 | 2400 | 0.4460 | 0.8070 | 0.807 |
| 0.4145 | 10.28 | 2600 | 0.4561 | 0.7835 | 0.784 |
| 0.4106 | 11.07 | 2800 | 0.4560 | 0.7907 | 0.791 |
| 0.4085 | 11.86 | 3000 | 0.4680 | 0.7936 | 0.794 |
| 0.404 | 12.65 | 3200 | 0.4566 | 0.7887 | 0.789 |
| 0.3998 | 13.44 | 3400 | 0.4553 | 0.7969 | 0.797 |
| 0.3977 | 14.23 | 3600 | 0.4474 | 0.7978 | 0.798 |
| 0.3957 | 15.02 | 3800 | 0.4564 | 0.7936 | 0.794 |
| 0.3928 | 15.81 | 4000 | 0.4417 | 0.7969 | 0.797 |
| 0.3856 | 16.6 | 4200 | 0.4596 | 0.7910 | 0.791 |
| 0.3891 | 17.39 | 4400 | 0.4669 | 0.7914 | 0.792 |
| 0.3839 | 18.18 | 4600 | 0.4529 | 0.7991 | 0.799 |
| 0.3802 | 18.97 | 4800 | 0.4674 | 0.7897 | 0.79 |
| 0.3798 | 19.76 | 5000 | 0.4528 | 0.8020 | 0.802 |
| 0.3777 | 20.55 | 5200 | 0.4634 | 0.7960 | 0.796 |
| 0.3715 | 21.34 | 5400 | 0.4812 | 0.7914 | 0.792 |
| 0.371 | 22.13 | 5600 | 0.4803 | 0.7957 | 0.796 |
| 0.3702 | 22.92 | 5800 | 0.4591 | 0.7940 | 0.794 |
| 0.3635 | 23.72 | 6000 | 0.4693 | 0.7959 | 0.796 |
| 0.3641 | 24.51 | 6200 | 0.4534 | 0.7888 | 0.789 |
| 0.3669 | 25.3 | 6400 | 0.4633 | 0.7939 | 0.794 |
| 0.3543 | 26.09 | 6600 | 0.4677 | 0.7990 | 0.799 |
| 0.3575 | 26.88 | 6800 | 0.4748 | 0.7916 | 0.792 |
| 0.3568 | 27.67 | 7000 | 0.4710 | 0.8010 | 0.801 |
| 0.3552 | 28.46 | 7200 | 0.4785 | 0.8000 | 0.8 |
| 0.3536 | 29.25 | 7400 | 0.4861 | 0.7907 | 0.791 |
| 0.3554 | 30.04 | 7600 | 0.4939 | 0.7812 | 0.782 |
| 0.3493 | 30.83 | 7800 | 0.4906 | 0.7945 | 0.795 |
| 0.3475 | 31.62 | 8000 | 0.4792 | 0.7940 | 0.794 |
| 0.3487 | 32.41 | 8200 | 0.4886 | 0.7935 | 0.794 |
| 0.3432 | 33.2 | 8400 | 0.4888 | 0.7987 | 0.799 |
| 0.3442 | 33.99 | 8600 | 0.4880 | 0.7968 | 0.797 |
| 0.3425 | 34.78 | 8800 | 0.4801 | 0.7929 | 0.793 |
| 0.339 | 35.57 | 9000 | 0.4823 | 0.7958 | 0.796 |
| 0.3385 | 36.36 | 9200 | 0.4975 | 0.7934 | 0.794 |
| 0.3431 | 37.15 | 9400 | 0.5009 | 0.7934 | 0.794 |
| 0.3398 | 37.94 | 9600 | 0.4922 | 0.7976 | 0.798 |
| 0.3427 | 38.74 | 9800 | 0.4941 | 0.7985 | 0.799 |
| 0.3397 | 39.53 | 10000 | 0.4934 | 0.7966 | 0.797 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:41:15+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_0-seqsight\_32768\_512\_43M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3652
* F1 Score: 0.8384
* Accuracy: 0.839
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3292
- F1 Score: 0.8619
- Accuracy: 0.862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5753 | 0.83 | 200 | 0.5324 | 0.7460 | 0.746 |
| 0.5032 | 1.67 | 400 | 0.5240 | 0.7560 | 0.756 |
| 0.4958 | 2.5 | 600 | 0.5291 | 0.7452 | 0.746 |
| 0.4912 | 3.33 | 800 | 0.5346 | 0.7416 | 0.744 |
| 0.4895 | 4.17 | 1000 | 0.5222 | 0.7441 | 0.745 |
| 0.4823 | 5.0 | 1200 | 0.5219 | 0.7545 | 0.755 |
| 0.4829 | 5.83 | 1400 | 0.5129 | 0.7566 | 0.757 |
| 0.4811 | 6.67 | 1600 | 0.5231 | 0.7508 | 0.752 |
| 0.4796 | 7.5 | 1800 | 0.5094 | 0.7617 | 0.762 |
| 0.4742 | 8.33 | 2000 | 0.5179 | 0.7529 | 0.754 |
| 0.4782 | 9.17 | 2200 | 0.5183 | 0.7549 | 0.756 |
| 0.4743 | 10.0 | 2400 | 0.5211 | 0.7608 | 0.762 |
| 0.4736 | 10.83 | 2600 | 0.5208 | 0.7544 | 0.756 |
| 0.476 | 11.67 | 2800 | 0.5182 | 0.7502 | 0.752 |
| 0.4653 | 12.5 | 3000 | 0.5132 | 0.7607 | 0.762 |
| 0.4765 | 13.33 | 3200 | 0.5096 | 0.7540 | 0.755 |
| 0.4698 | 14.17 | 3400 | 0.5039 | 0.7586 | 0.759 |
| 0.4679 | 15.0 | 3600 | 0.5267 | 0.7440 | 0.747 |
| 0.4667 | 15.83 | 3800 | 0.5089 | 0.7536 | 0.755 |
| 0.469 | 16.67 | 4000 | 0.5170 | 0.7490 | 0.751 |
| 0.4635 | 17.5 | 4200 | 0.5128 | 0.7552 | 0.757 |
| 0.4626 | 18.33 | 4400 | 0.5197 | 0.7558 | 0.757 |
| 0.4654 | 19.17 | 4600 | 0.5184 | 0.7466 | 0.749 |
| 0.4663 | 20.0 | 4800 | 0.5068 | 0.7577 | 0.759 |
| 0.4641 | 20.83 | 5000 | 0.5086 | 0.7640 | 0.765 |
| 0.4619 | 21.67 | 5200 | 0.5033 | 0.7632 | 0.764 |
| 0.4611 | 22.5 | 5400 | 0.5023 | 0.7566 | 0.757 |
| 0.4627 | 23.33 | 5600 | 0.5086 | 0.7616 | 0.763 |
| 0.4608 | 24.17 | 5800 | 0.5129 | 0.7565 | 0.758 |
| 0.4619 | 25.0 | 6000 | 0.5022 | 0.7525 | 0.753 |
| 0.4577 | 25.83 | 6200 | 0.5043 | 0.7562 | 0.757 |
| 0.4599 | 26.67 | 6400 | 0.4997 | 0.7568 | 0.757 |
| 0.4641 | 27.5 | 6600 | 0.5066 | 0.7571 | 0.758 |
| 0.4596 | 28.33 | 6800 | 0.5053 | 0.7601 | 0.761 |
| 0.4601 | 29.17 | 7000 | 0.5050 | 0.7621 | 0.763 |
| 0.4597 | 30.0 | 7200 | 0.5037 | 0.7590 | 0.76 |
| 0.458 | 30.83 | 7400 | 0.5124 | 0.7511 | 0.753 |
| 0.4566 | 31.67 | 7600 | 0.5007 | 0.7603 | 0.761 |
| 0.4605 | 32.5 | 7800 | 0.5002 | 0.7592 | 0.76 |
| 0.4554 | 33.33 | 8000 | 0.5048 | 0.7568 | 0.758 |
| 0.4596 | 34.17 | 8200 | 0.5029 | 0.7621 | 0.763 |
| 0.4559 | 35.0 | 8400 | 0.5016 | 0.7591 | 0.76 |
| 0.4562 | 35.83 | 8600 | 0.5031 | 0.7590 | 0.76 |
| 0.4577 | 36.67 | 8800 | 0.5046 | 0.7597 | 0.761 |
| 0.4529 | 37.5 | 9000 | 0.5138 | 0.7467 | 0.749 |
| 0.4558 | 38.33 | 9200 | 0.5047 | 0.7619 | 0.763 |
| 0.4549 | 39.17 | 9400 | 0.5045 | 0.7600 | 0.761 |
| 0.4568 | 40.0 | 9600 | 0.5059 | 0.7628 | 0.764 |
| 0.4546 | 40.83 | 9800 | 0.5018 | 0.7562 | 0.757 |
| 0.4577 | 41.67 | 10000 | 0.5032 | 0.7600 | 0.761 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_1-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:41:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_1-seqsight\_32768\_512\_43M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3292
* F1 Score: 0.8619
* Accuracy: 0.862
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3159
- F1 Score: 0.8648
- Accuracy: 0.865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5511 | 0.83 | 200 | 0.5232 | 0.7560 | 0.756 |
| 0.4924 | 1.67 | 400 | 0.5161 | 0.7580 | 0.758 |
| 0.4859 | 2.5 | 600 | 0.5182 | 0.7533 | 0.754 |
| 0.4813 | 3.33 | 800 | 0.5122 | 0.7458 | 0.747 |
| 0.4788 | 4.17 | 1000 | 0.5201 | 0.7502 | 0.752 |
| 0.4714 | 5.0 | 1200 | 0.5120 | 0.7552 | 0.756 |
| 0.469 | 5.83 | 1400 | 0.5057 | 0.7554 | 0.756 |
| 0.4678 | 6.67 | 1600 | 0.5125 | 0.7494 | 0.751 |
| 0.4659 | 7.5 | 1800 | 0.4952 | 0.7660 | 0.766 |
| 0.459 | 8.33 | 2000 | 0.5023 | 0.7564 | 0.757 |
| 0.4632 | 9.17 | 2200 | 0.4981 | 0.7615 | 0.762 |
| 0.4572 | 10.0 | 2400 | 0.5176 | 0.7485 | 0.751 |
| 0.456 | 10.83 | 2600 | 0.5159 | 0.7429 | 0.746 |
| 0.457 | 11.67 | 2800 | 0.5063 | 0.7481 | 0.75 |
| 0.4461 | 12.5 | 3000 | 0.4999 | 0.7557 | 0.757 |
| 0.456 | 13.33 | 3200 | 0.5034 | 0.7363 | 0.739 |
| 0.4491 | 14.17 | 3400 | 0.4857 | 0.7588 | 0.759 |
| 0.4467 | 15.0 | 3600 | 0.5032 | 0.7476 | 0.75 |
| 0.4443 | 15.83 | 3800 | 0.4992 | 0.7478 | 0.75 |
| 0.446 | 16.67 | 4000 | 0.5010 | 0.7455 | 0.748 |
| 0.4396 | 17.5 | 4200 | 0.5046 | 0.7456 | 0.748 |
| 0.4384 | 18.33 | 4400 | 0.5007 | 0.7535 | 0.755 |
| 0.4406 | 19.17 | 4600 | 0.5021 | 0.7483 | 0.75 |
| 0.4418 | 20.0 | 4800 | 0.4931 | 0.7513 | 0.753 |
| 0.4385 | 20.83 | 5000 | 0.4958 | 0.7564 | 0.758 |
| 0.436 | 21.67 | 5200 | 0.4934 | 0.7572 | 0.759 |
| 0.432 | 22.5 | 5400 | 0.4955 | 0.7572 | 0.758 |
| 0.4346 | 23.33 | 5600 | 0.5005 | 0.7431 | 0.746 |
| 0.433 | 24.17 | 5800 | 0.5042 | 0.7455 | 0.749 |
| 0.433 | 25.0 | 6000 | 0.4909 | 0.7583 | 0.759 |
| 0.4292 | 25.83 | 6200 | 0.4892 | 0.7583 | 0.759 |
| 0.4289 | 26.67 | 6400 | 0.4848 | 0.7769 | 0.777 |
| 0.4325 | 27.5 | 6600 | 0.4913 | 0.7591 | 0.76 |
| 0.4285 | 28.33 | 6800 | 0.4898 | 0.7613 | 0.762 |
| 0.4287 | 29.17 | 7000 | 0.4929 | 0.7599 | 0.761 |
| 0.4281 | 30.0 | 7200 | 0.4861 | 0.7592 | 0.76 |
| 0.4261 | 30.83 | 7400 | 0.4980 | 0.7487 | 0.751 |
| 0.424 | 31.67 | 7600 | 0.4926 | 0.7612 | 0.762 |
| 0.4284 | 32.5 | 7800 | 0.4902 | 0.7550 | 0.756 |
| 0.4256 | 33.33 | 8000 | 0.4937 | 0.7558 | 0.757 |
| 0.4261 | 34.17 | 8200 | 0.4939 | 0.7629 | 0.764 |
| 0.422 | 35.0 | 8400 | 0.4920 | 0.7559 | 0.757 |
| 0.4241 | 35.83 | 8600 | 0.4917 | 0.7580 | 0.759 |
| 0.4226 | 36.67 | 8800 | 0.4958 | 0.7505 | 0.752 |
| 0.4193 | 37.5 | 9000 | 0.5064 | 0.7410 | 0.744 |
| 0.4206 | 38.33 | 9200 | 0.4976 | 0.7538 | 0.755 |
| 0.4194 | 39.17 | 9400 | 0.4972 | 0.7547 | 0.756 |
| 0.4216 | 40.0 | 9600 | 0.4976 | 0.7514 | 0.753 |
| 0.42 | 40.83 | 9800 | 0.4927 | 0.7592 | 0.76 |
| 0.4225 | 41.67 | 10000 | 0.4951 | 0.7559 | 0.757 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_1-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:42:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_1-seqsight\_32768\_512\_43M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3159
* F1 Score: 0.8648
* Accuracy: 0.865
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | ASR+Diarization handler that works natively with Inference Endpoints.
Example payload:
```python
import base64
import requests

API_URL = "<your endpoint URL>"

# Read the audio file and base64-encode it so it can travel in a JSON payload
filepath = "/path/to/audio"
with open(filepath, "rb") as f:
    audio_encoded = base64.b64encode(f.read()).decode("utf-8")

# "parameters" are forwarded to the handler; batch_size controls inference batching
data = {
    "inputs": audio_encoded,
    "parameters": {
        "batch_size": 24
    }
}

resp = requests.post(API_URL, json=data, headers={"Authorization": "Bearer <your token>"})
print(resp.json())  # JSON response from the endpoint
``` | {} | unclecode/asrdiarization-handler | null | [
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:42:42+00:00 | [] | [] | TAGS
#endpoints_compatible #region-us
| ASR+Diarization handler that works natively with Inference Endpoints.
Example payload:
| [] | [
"TAGS\n#endpoints_compatible #region-us \n"
] | [
10
] | [
"TAGS\n#endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Anifu-L3-8B-64k
Anifu-L3-8B-64k is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
* [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
        layer_range: [0, 32]
      - model: MaziyarPanahi/Llama-3-8B-Instruct-64k
        layer_range: [0, 32]
merge_method: slerp
base_model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.4
dtype: bfloat16
```
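The `t` schedule above controls how far each layer moves from the base model (`t = 0`) toward the second model (`t = 1`), with separate curves for self-attention and MLP weights and a default of 0.4 elsewhere. Conceptually, SLERP interpolates along the arc between the two weight tensors rather than along a straight line; a rough illustration of the idea (not mergekit's actual implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))  # angle between them
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```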
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Darkknight6742/Anifu-L3-8B-64k"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the chat-formatted prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B", "MaziyarPanahi/Llama-3-8B-Instruct-64k"], "base_model": ["ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B", "MaziyarPanahi/Llama-3-8B-Instruct-64k"]} | Darkknight6742/Anifu-L3-8B-64k | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B",
"MaziyarPanahi/Llama-3-8B-Instruct-64k",
"conversational",
"base_model:ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-64k",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:44:18+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B #MaziyarPanahi/Llama-3-8B-Instruct-64k #conversational #base_model-ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B #base_model-MaziyarPanahi/Llama-3-8B-Instruct-64k #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Anifu-L3-8B-64k
Anifu-L3-8B-64k is a merge of the following models using LazyMergekit:
* ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
* MaziyarPanahi/Llama-3-8B-Instruct-64k
## Configuration
## Usage
| [
"# Anifu-L3-8B-64k\n\nAnifu-L3-8B-64k is a merge of the following models using LazyMergekit:\n* ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B\n* MaziyarPanahi/Llama-3-8B-Instruct-64k",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B #MaziyarPanahi/Llama-3-8B-Instruct-64k #conversational #base_model-ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B #base_model-MaziyarPanahi/Llama-3-8B-Instruct-64k #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Anifu-L3-8B-64k\n\nAnifu-L3-8B-64k is a merge of the following models using LazyMergekit:\n* ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B\n* MaziyarPanahi/Llama-3-8B-Instruct-64k",
"## Configuration",
"## Usage"
] | [
143,
82,
3,
3
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B #MaziyarPanahi/Llama-3-8B-Instruct-64k #conversational #base_model-ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B #base_model-MaziyarPanahi/Llama-3-8B-Instruct-64k #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Anifu-L3-8B-64k\n\nAnifu-L3-8B-64k is a merge of the following models using LazyMergekit:\n* ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B\n* MaziyarPanahi/Llama-3-8B-Instruct-64k## Configuration## Usage"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_2_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:44:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Meta-llama](https://huggingface.co/meta-llama/)
- **Model type:** [Llama-3](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Language(s) (NLP):** en,spa,bn
- **License:** [Meta/llama-3-8b](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Finetuned from model [optional]:** [Click here](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Same as Llama-3.**
### Direct Use
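
The section above is empty, so a minimal quick-start sketch follows. It assumes the checkpoint works with the standard `transformers` text-generation pipeline; the repo id comes from this card's metadata, and the prompt is only an illustration (the card's `chemistry` tag suggests the domain).

```python
# Minimal sketch, assuming the checkpoint loads with the standard
# text-generation pipeline; repo id taken from this card's metadata.
from transformers import pipeline

generator = pipeline("text-generation", model="ar08/Llama-3-1.7B")
out = generator("Explain covalent bonding in one sentence:", max_new_tokens=64)
print(out[0]["generated_text"])
```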
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["chemistry"]} | ar08/Llama-3-1.7B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chemistry",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:46:03+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #chemistry #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Meta-llama
- Model type: Llama-3
- Language(s) (NLP): en,spa,bn
- License: Meta/llama-3-8b
- Finetuned from model [optional]: Click here
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
Same as Llama-3
### Direct Use
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Meta-llama\n\n\n- Model type: Llamm-03\n- Language(s) (NLP): en,spa,bn\n- License: Meta/llama-3-8b\n- Finetuned from model [optional]: Click here",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses\n\n\nAs same as Llama-3",
"### Direct Use"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #chemistry #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Meta-llama\n\n\n- Model type: Llamm-03\n- Language(s) (NLP): en,spa,bn\n- License: Meta/llama-3-8b\n- Finetuned from model [optional]: Click here",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses\n\n\nAs same as Llama-3",
"### Direct Use"
] | [
42,
6,
4,
86,
23,
10,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #chemistry #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Meta-llama\n\n\n- Model type: Llamm-03\n- Language(s) (NLP): en,spa,bn\n- License: Meta/llama-3-8b\n- Finetuned from model [optional]: Click here### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses\n\n\nAs same as Llama-3### Direct Use"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
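
Since the usage sections are placeholders, a hedged loading sketch may help. It assumes the checkpoint was saved as a standard sequence-classification model (the repo carries the `text-classification` tag); the label mapping is not documented on this card.

```python
# Hedged sketch: load the fine-tuned classifier and score one example.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "AlignmentResearch/robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A surprisingly touching film.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # predicted class id; label names are not documented here
```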
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
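
These settings map directly onto `transformers.TrainingArguments`; a minimal sketch is below. The output directory name is hypothetical, and the actual training script is not documented, so treat this as an equivalent configuration rather than the authors' code.

```python
# Sketch of the listed hyperparameters as TrainingArguments;
# output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pythia-70m-imdb-finetune",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=3,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```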
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:50:05+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3
This model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
70,
63,
7,
9,
9,
4,
93,
5,
40
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-70m_niki-041a_imdb_random-token-1280_10-rounds_seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3299
- F1 Score: 0.8650
- Accuracy: 0.865
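
Because this is a PEFT checkpoint, it has to be attached to its base model at load time. A hedged sketch is below: the base and adapter ids come from this card, but the task head (`AutoModelForSequenceClassification`) is an assumption the card does not confirm.

```python
# Hedged sketch: attach the PEFT adapter to its base model.
# The sequence-classification head is an assumption, not stated on the card.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter = "mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)
model = PeftModel.from_pretrained(model, adapter)
model.eval()
```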
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5426 | 0.83 | 200 | 0.5207 | 0.7548 | 0.755 |
| 0.4898 | 1.67 | 400 | 0.5121 | 0.7529 | 0.753 |
| 0.4795 | 2.5 | 600 | 0.5107 | 0.7487 | 0.749 |
| 0.4729 | 3.33 | 800 | 0.4987 | 0.7531 | 0.754 |
| 0.4672 | 4.17 | 1000 | 0.5107 | 0.7475 | 0.75 |
| 0.4584 | 5.0 | 1200 | 0.5061 | 0.7479 | 0.75 |
| 0.4542 | 5.83 | 1400 | 0.4856 | 0.7618 | 0.762 |
| 0.4532 | 6.67 | 1600 | 0.5012 | 0.7458 | 0.748 |
| 0.4497 | 7.5 | 1800 | 0.4812 | 0.7560 | 0.756 |
| 0.44 | 8.33 | 2000 | 0.4899 | 0.7664 | 0.767 |
| 0.4437 | 9.17 | 2200 | 0.4879 | 0.7674 | 0.768 |
| 0.4366 | 10.0 | 2400 | 0.5086 | 0.7505 | 0.753 |
| 0.4342 | 10.83 | 2600 | 0.5080 | 0.7504 | 0.754 |
| 0.4328 | 11.67 | 2800 | 0.4901 | 0.7601 | 0.762 |
| 0.4214 | 12.5 | 3000 | 0.4984 | 0.7576 | 0.759 |
| 0.4301 | 13.33 | 3200 | 0.4965 | 0.7526 | 0.754 |
| 0.4209 | 14.17 | 3400 | 0.4845 | 0.7678 | 0.768 |
| 0.419 | 15.0 | 3600 | 0.4970 | 0.7512 | 0.753 |
| 0.4128 | 15.83 | 3800 | 0.5032 | 0.7519 | 0.754 |
| 0.4134 | 16.67 | 4000 | 0.4962 | 0.7599 | 0.761 |
| 0.4069 | 17.5 | 4200 | 0.5017 | 0.7547 | 0.757 |
| 0.4046 | 18.33 | 4400 | 0.5081 | 0.7597 | 0.761 |
| 0.4047 | 19.17 | 4600 | 0.5207 | 0.7535 | 0.756 |
| 0.4058 | 20.0 | 4800 | 0.4888 | 0.7605 | 0.761 |
| 0.3997 | 20.83 | 5000 | 0.5040 | 0.7511 | 0.753 |
| 0.3948 | 21.67 | 5200 | 0.5080 | 0.7520 | 0.754 |
| 0.39 | 22.5 | 5400 | 0.5293 | 0.7544 | 0.756 |
| 0.3894 | 23.33 | 5600 | 0.5430 | 0.7407 | 0.745 |
| 0.391 | 24.17 | 5800 | 0.5250 | 0.7473 | 0.751 |
| 0.3871 | 25.0 | 6000 | 0.4991 | 0.7573 | 0.758 |
| 0.383 | 25.83 | 6200 | 0.5037 | 0.7620 | 0.763 |
| 0.3816 | 26.67 | 6400 | 0.4972 | 0.7696 | 0.77 |
| 0.3823 | 27.5 | 6600 | 0.5181 | 0.7692 | 0.77 |
| 0.3758 | 28.33 | 6800 | 0.5215 | 0.7571 | 0.758 |
| 0.3744 | 29.17 | 7000 | 0.5173 | 0.7549 | 0.756 |
| 0.3753 | 30.0 | 7200 | 0.5160 | 0.7581 | 0.759 |
| 0.3718 | 30.83 | 7400 | 0.5256 | 0.7541 | 0.756 |
| 0.3693 | 31.67 | 7600 | 0.5339 | 0.7508 | 0.752 |
| 0.3713 | 32.5 | 7800 | 0.5280 | 0.7515 | 0.753 |
| 0.3659 | 33.33 | 8000 | 0.5400 | 0.7570 | 0.759 |
| 0.3684 | 34.17 | 8200 | 0.5305 | 0.7573 | 0.759 |
| 0.3639 | 35.0 | 8400 | 0.5285 | 0.7558 | 0.757 |
| 0.3635 | 35.83 | 8600 | 0.5302 | 0.7504 | 0.752 |
| 0.3591 | 36.67 | 8800 | 0.5316 | 0.7483 | 0.75 |
| 0.3574 | 37.5 | 9000 | 0.5520 | 0.7394 | 0.743 |
| 0.36 | 38.33 | 9200 | 0.5386 | 0.7572 | 0.759 |
| 0.3564 | 39.17 | 9400 | 0.5440 | 0.7563 | 0.758 |
| 0.3586 | 40.0 | 9600 | 0.5405 | 0.7541 | 0.756 |
| 0.3562 | 40.83 | 9800 | 0.5336 | 0.7535 | 0.755 |
| 0.3582 | 41.67 | 10000 | 0.5357 | 0.7563 | 0.758 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_1-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:50:41+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_1-seqsight\_32768\_512\_43M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3299
* F1 Score: 0.8650
* Accuracy: 0.865
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3525
- F1 Score: 0.8409
- Accuracy: 0.841
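
The card reports F1 and accuracy but not how they were computed; a plausible `compute_metrics` hook using scikit-learn is sketched below. The macro averaging is an assumption, not confirmed by the card.

```python
# Plausible metric hook for a transformers Trainer (an assumption;
# the authors' metric code is not published on this card).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging assumed
        "accuracy": accuracy_score(labels, preds),
    }
```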
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5652 | 1.34 | 200 | 0.5184 | 0.7476 | 0.748 |
| 0.4877 | 2.68 | 400 | 0.5041 | 0.7616 | 0.762 |
| 0.4799 | 4.03 | 600 | 0.4936 | 0.7639 | 0.764 |
| 0.4705 | 5.37 | 800 | 0.5092 | 0.7592 | 0.761 |
| 0.4644 | 6.71 | 1000 | 0.4823 | 0.7739 | 0.774 |
| 0.4586 | 8.05 | 1200 | 0.4981 | 0.7600 | 0.762 |
| 0.4561 | 9.4 | 1400 | 0.4832 | 0.7671 | 0.768 |
| 0.4535 | 10.74 | 1600 | 0.4726 | 0.7828 | 0.783 |
| 0.4457 | 12.08 | 1800 | 0.4701 | 0.7740 | 0.774 |
| 0.4456 | 13.42 | 2000 | 0.4692 | 0.7723 | 0.773 |
| 0.4359 | 14.77 | 2200 | 0.4918 | 0.7597 | 0.762 |
| 0.4351 | 16.11 | 2400 | 0.4658 | 0.7827 | 0.783 |
| 0.4278 | 17.45 | 2600 | 0.4864 | 0.7612 | 0.763 |
| 0.43 | 18.79 | 2800 | 0.4717 | 0.7740 | 0.775 |
| 0.4299 | 20.13 | 3000 | 0.4732 | 0.7739 | 0.775 |
| 0.4232 | 21.48 | 3200 | 0.4721 | 0.7731 | 0.774 |
| 0.4235 | 22.82 | 3400 | 0.4691 | 0.7828 | 0.783 |
| 0.4209 | 24.16 | 3600 | 0.4699 | 0.7792 | 0.78 |
| 0.4215 | 25.5 | 3800 | 0.4663 | 0.7866 | 0.787 |
| 0.4187 | 26.85 | 4000 | 0.4742 | 0.7740 | 0.775 |
| 0.4209 | 28.19 | 4200 | 0.4767 | 0.7686 | 0.77 |
| 0.4122 | 29.53 | 4400 | 0.4799 | 0.7709 | 0.772 |
| 0.4148 | 30.87 | 4600 | 0.4647 | 0.7844 | 0.785 |
| 0.4128 | 32.21 | 4800 | 0.4668 | 0.7825 | 0.783 |
| 0.41 | 33.56 | 5000 | 0.4730 | 0.7845 | 0.785 |
| 0.4098 | 34.9 | 5200 | 0.4762 | 0.7771 | 0.778 |
| 0.4145 | 36.24 | 5400 | 0.4719 | 0.7718 | 0.773 |
| 0.4083 | 37.58 | 5600 | 0.4733 | 0.7811 | 0.782 |
| 0.4074 | 38.93 | 5800 | 0.4723 | 0.7812 | 0.782 |
| 0.4062 | 40.27 | 6000 | 0.4799 | 0.7729 | 0.774 |
| 0.4069 | 41.61 | 6200 | 0.4714 | 0.7782 | 0.779 |
| 0.4104 | 42.95 | 6400 | 0.4786 | 0.7704 | 0.772 |
| 0.4065 | 44.3 | 6600 | 0.4687 | 0.7802 | 0.781 |
| 0.4025 | 45.64 | 6800 | 0.4757 | 0.7718 | 0.773 |
| 0.4063 | 46.98 | 7000 | 0.4797 | 0.7716 | 0.773 |
| 0.4046 | 48.32 | 7200 | 0.4751 | 0.7727 | 0.774 |
| 0.4025 | 49.66 | 7400 | 0.4780 | 0.7704 | 0.772 |
| 0.4009 | 51.01 | 7600 | 0.4685 | 0.7752 | 0.776 |
| 0.4009 | 52.35 | 7800 | 0.4640 | 0.7845 | 0.785 |
| 0.3984 | 53.69 | 8000 | 0.4695 | 0.7793 | 0.78 |
| 0.4034 | 55.03 | 8200 | 0.4808 | 0.7712 | 0.773 |
| 0.3999 | 56.38 | 8400 | 0.4738 | 0.7718 | 0.773 |
| 0.403 | 57.72 | 8600 | 0.4629 | 0.7837 | 0.784 |
| 0.3985 | 59.06 | 8800 | 0.4747 | 0.7716 | 0.773 |
| 0.3983 | 60.4 | 9000 | 0.4715 | 0.7709 | 0.772 |
| 0.3984 | 61.74 | 9200 | 0.4686 | 0.7783 | 0.779 |
| 0.3964 | 63.09 | 9400 | 0.4691 | 0.7741 | 0.775 |
| 0.4005 | 64.43 | 9600 | 0.4670 | 0.7793 | 0.78 |
| 0.3999 | 65.77 | 9800 | 0.4678 | 0.7752 | 0.776 |
| 0.3968 | 67.11 | 10000 | 0.4685 | 0.7731 | 0.774 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_4-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:50:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_4-seqsight\_32768\_512\_43M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3525
* F1 Score: 0.8409
* Accuracy: 0.841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3664
- F1 Score: 0.8440
- Accuracy: 0.844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
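
Written out in plain PyTorch, the listed optimizer and schedule look roughly like the sketch below; the `nn.Linear` stand-in replaces the real PEFT-wrapped model so the snippet runs on its own, and zero warmup is assumed since the card does not mention any.

```python
# The listed Adam settings and linear decay over 10,000 steps, written out.
import torch
from torch import nn
from transformers import get_linear_schedule_with_warmup

model = nn.Linear(8, 2)  # stand-in for the actual PEFT-wrapped model
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0,  # warmup not stated on the card; 0 assumed
    num_training_steps=10_000)
```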
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5395 | 1.34 | 200 | 0.5065 | 0.7604 | 0.761 |
| 0.473 | 2.68 | 400 | 0.4977 | 0.7742 | 0.775 |
| 0.4596 | 4.03 | 600 | 0.4805 | 0.7788 | 0.779 |
| 0.446 | 5.37 | 800 | 0.4969 | 0.7695 | 0.771 |
| 0.4349 | 6.71 | 1000 | 0.4629 | 0.7830 | 0.783 |
| 0.4259 | 8.05 | 1200 | 0.4758 | 0.7771 | 0.778 |
| 0.4221 | 9.4 | 1400 | 0.4721 | 0.7762 | 0.777 |
| 0.418 | 10.74 | 1600 | 0.4747 | 0.7750 | 0.776 |
| 0.4097 | 12.08 | 1800 | 0.4576 | 0.7920 | 0.792 |
| 0.4076 | 13.42 | 2000 | 0.4689 | 0.7717 | 0.773 |
| 0.3996 | 14.77 | 2200 | 0.4714 | 0.7811 | 0.782 |
| 0.3953 | 16.11 | 2400 | 0.4535 | 0.7869 | 0.787 |
| 0.3894 | 17.45 | 2600 | 0.4984 | 0.7625 | 0.765 |
| 0.3893 | 18.79 | 2800 | 0.4684 | 0.7783 | 0.779 |
| 0.3866 | 20.13 | 3000 | 0.4674 | 0.7831 | 0.784 |
| 0.3787 | 21.48 | 3200 | 0.4584 | 0.7877 | 0.788 |
| 0.3781 | 22.82 | 3400 | 0.4598 | 0.7927 | 0.793 |
| 0.37 | 24.16 | 3600 | 0.4506 | 0.7897 | 0.79 |
| 0.3713 | 25.5 | 3800 | 0.4447 | 0.7970 | 0.797 |
| 0.3674 | 26.85 | 4000 | 0.4572 | 0.7925 | 0.793 |
| 0.3667 | 28.19 | 4200 | 0.4565 | 0.7944 | 0.795 |
| 0.355 | 29.53 | 4400 | 0.4611 | 0.8008 | 0.801 |
| 0.3578 | 30.87 | 4600 | 0.4698 | 0.7824 | 0.784 |
| 0.3521 | 32.21 | 4800 | 0.4609 | 0.7994 | 0.8 |
| 0.3515 | 33.56 | 5000 | 0.4644 | 0.7924 | 0.793 |
| 0.3482 | 34.9 | 5200 | 0.4621 | 0.7974 | 0.798 |
| 0.3454 | 36.24 | 5400 | 0.4478 | 0.7977 | 0.798 |
| 0.3406 | 37.58 | 5600 | 0.4505 | 0.7986 | 0.799 |
| 0.3393 | 38.93 | 5800 | 0.4468 | 0.7996 | 0.8 |
| 0.3398 | 40.27 | 6000 | 0.4397 | 0.8089 | 0.809 |
| 0.3357 | 41.61 | 6200 | 0.4596 | 0.7963 | 0.797 |
| 0.3348 | 42.95 | 6400 | 0.4563 | 0.8005 | 0.801 |
| 0.3337 | 44.3 | 6600 | 0.4345 | 0.8039 | 0.804 |
| 0.3275 | 45.64 | 6800 | 0.4579 | 0.8004 | 0.801 |
| 0.3288 | 46.98 | 7000 | 0.4472 | 0.8006 | 0.801 |
| 0.3227 | 48.32 | 7200 | 0.4412 | 0.8078 | 0.808 |
| 0.3194 | 49.66 | 7400 | 0.4405 | 0.8098 | 0.81 |
| 0.3193 | 51.01 | 7600 | 0.4455 | 0.8118 | 0.812 |
| 0.3177 | 52.35 | 7800 | 0.4348 | 0.8109 | 0.811 |
| 0.3156 | 53.69 | 8000 | 0.4517 | 0.8016 | 0.802 |
| 0.3216 | 55.03 | 8200 | 0.4537 | 0.8034 | 0.804 |
| 0.3176 | 56.38 | 8400 | 0.4400 | 0.8129 | 0.813 |
| 0.3155 | 57.72 | 8600 | 0.4406 | 0.8098 | 0.81 |
| 0.3155 | 59.06 | 8800 | 0.4436 | 0.8067 | 0.807 |
| 0.3129 | 60.4 | 9000 | 0.4436 | 0.8108 | 0.811 |
| 0.3103 | 61.74 | 9200 | 0.4430 | 0.8129 | 0.813 |
| 0.3094 | 63.09 | 9400 | 0.4447 | 0.8088 | 0.809 |
| 0.3115 | 64.43 | 9600 | 0.4373 | 0.8069 | 0.807 |
| 0.3109 | 65.77 | 9800 | 0.4408 | 0.8119 | 0.812 |
| 0.3071 | 67.11 | 10000 | 0.4416 | 0.8108 | 0.811 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_4-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:51:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_4-seqsight\_32768\_512\_43M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3664
* F1 Score: 0.8440
* Accuracy: 0.844
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5449
- F1 Score: 0.8419
- Accuracy: 0.842
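
The fine-tuning data is linked on the Hub as `mahdibaghbanzadeh/GUE_tf_4`; a short sketch for inspecting it follows (split and column names are unknown until printed).

```python
# Inspect the dataset referenced by this card; splits/columns are
# not documented here, so print the structure first.
from datasets import load_dataset

ds = load_dataset("mahdibaghbanzadeh/GUE_tf_4")
print(ds)
```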
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5249 | 1.34 | 200 | 0.5003 | 0.7613 | 0.762 |
| 0.4582 | 2.68 | 400 | 0.5002 | 0.7683 | 0.77 |
| 0.4361 | 4.03 | 600 | 0.4676 | 0.7870 | 0.787 |
| 0.4227 | 5.37 | 800 | 0.4979 | 0.7600 | 0.762 |
| 0.4098 | 6.71 | 1000 | 0.4522 | 0.7960 | 0.796 |
| 0.3975 | 8.05 | 1200 | 0.4612 | 0.7835 | 0.784 |
| 0.3877 | 9.4 | 1400 | 0.4564 | 0.7857 | 0.786 |
| 0.3782 | 10.74 | 1600 | 0.4664 | 0.7852 | 0.786 |
| 0.3651 | 12.08 | 1800 | 0.4446 | 0.8029 | 0.803 |
| 0.3533 | 13.42 | 2000 | 0.4927 | 0.7778 | 0.78 |
| 0.3384 | 14.77 | 2200 | 0.4619 | 0.7994 | 0.8 |
| 0.3297 | 16.11 | 2400 | 0.4501 | 0.8110 | 0.811 |
| 0.3142 | 17.45 | 2600 | 0.4830 | 0.7909 | 0.792 |
| 0.3075 | 18.79 | 2800 | 0.4490 | 0.7987 | 0.799 |
| 0.2974 | 20.13 | 3000 | 0.4462 | 0.8067 | 0.807 |
| 0.2863 | 21.48 | 3200 | 0.4345 | 0.8190 | 0.819 |
| 0.2774 | 22.82 | 3400 | 0.4409 | 0.822 | 0.822 |
| 0.2675 | 24.16 | 3600 | 0.4405 | 0.8168 | 0.817 |
| 0.2601 | 25.5 | 3800 | 0.4492 | 0.8219 | 0.822 |
| 0.2509 | 26.85 | 4000 | 0.4498 | 0.8169 | 0.817 |
| 0.2468 | 28.19 | 4200 | 0.4628 | 0.8147 | 0.815 |
| 0.2333 | 29.53 | 4400 | 0.4515 | 0.8390 | 0.839 |
| 0.2304 | 30.87 | 4600 | 0.4937 | 0.8082 | 0.809 |
| 0.2176 | 32.21 | 4800 | 0.4734 | 0.8269 | 0.827 |
| 0.2179 | 33.56 | 5000 | 0.4485 | 0.8330 | 0.833 |
| 0.2091 | 34.9 | 5200 | 0.4607 | 0.8230 | 0.823 |
| 0.2066 | 36.24 | 5400 | 0.4538 | 0.8350 | 0.835 |
| 0.1927 | 37.58 | 5600 | 0.4678 | 0.8349 | 0.835 |
| 0.1921 | 38.93 | 5800 | 0.4629 | 0.842 | 0.842 |
| 0.1926 | 40.27 | 6000 | 0.4551 | 0.8479 | 0.848 |
| 0.1822 | 41.61 | 6200 | 0.4667 | 0.8530 | 0.853 |
| 0.1803 | 42.95 | 6400 | 0.4500 | 0.8510 | 0.851 |
| 0.1806 | 44.3 | 6600 | 0.4580 | 0.8509 | 0.851 |
| 0.1754 | 45.64 | 6800 | 0.4692 | 0.8500 | 0.85 |
| 0.1735 | 46.98 | 7000 | 0.4669 | 0.852 | 0.852 |
| 0.1623 | 48.32 | 7200 | 0.4765 | 0.8489 | 0.849 |
| 0.1588 | 49.66 | 7400 | 0.4864 | 0.8529 | 0.853 |
| 0.1613 | 51.01 | 7600 | 0.4871 | 0.8480 | 0.848 |
| 0.1537 | 52.35 | 7800 | 0.4830 | 0.8549 | 0.855 |
| 0.1541 | 53.69 | 8000 | 0.4832 | 0.8490 | 0.849 |
| 0.1551 | 55.03 | 8200 | 0.4792 | 0.8580 | 0.858 |
| 0.1497 | 56.38 | 8400 | 0.4938 | 0.86 | 0.86 |
| 0.1463 | 57.72 | 8600 | 0.4925 | 0.8610 | 0.861 |
| 0.1466 | 59.06 | 8800 | 0.4842 | 0.8619 | 0.862 |
| 0.148 | 60.4 | 9000 | 0.4896 | 0.8560 | 0.856 |
| 0.1443 | 61.74 | 9200 | 0.4828 | 0.8619 | 0.862 |
| 0.1419 | 63.09 | 9400 | 0.4857 | 0.8610 | 0.861 |
| 0.1434 | 64.43 | 9600 | 0.4859 | 0.8620 | 0.862 |
| 0.1379 | 65.77 | 9800 | 0.4873 | 0.8620 | 0.862 |
| 0.1406 | 67.11 | 10000 | 0.4871 | 0.8630 | 0.863 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_4-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:51:44+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_4-seqsight\_32768\_512\_43M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5449
* F1 Score: 0.8419
* Accuracy: 0.842
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5590
- F1 Score: 0.7071
- Accuracy: 0.709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6424 | 0.93 | 200 | 0.5880 | 0.6796 | 0.68 |
| 0.6058 | 1.87 | 400 | 0.5724 | 0.6941 | 0.694 |
| 0.5961 | 2.8 | 600 | 0.5619 | 0.6998 | 0.701 |
| 0.5911 | 3.74 | 800 | 0.5639 | 0.7028 | 0.703 |
| 0.5891 | 4.67 | 1000 | 0.5618 | 0.6999 | 0.7 |
| 0.584 | 5.61 | 1200 | 0.5622 | 0.7 | 0.7 |
| 0.5803 | 6.54 | 1400 | 0.5561 | 0.7039 | 0.704 |
| 0.5807 | 7.48 | 1600 | 0.5620 | 0.7050 | 0.705 |
| 0.5772 | 8.41 | 1800 | 0.5579 | 0.7001 | 0.7 |
| 0.58 | 9.35 | 2000 | 0.5559 | 0.7091 | 0.709 |
| 0.5729 | 10.28 | 2200 | 0.5700 | 0.6928 | 0.694 |
| 0.5733 | 11.21 | 2400 | 0.5502 | 0.7209 | 0.721 |
| 0.574 | 12.15 | 2600 | 0.5446 | 0.7208 | 0.721 |
| 0.5713 | 13.08 | 2800 | 0.5433 | 0.7225 | 0.723 |
| 0.5699 | 14.02 | 3000 | 0.5481 | 0.7130 | 0.713 |
| 0.5687 | 14.95 | 3200 | 0.5477 | 0.7111 | 0.711 |
| 0.5689 | 15.89 | 3400 | 0.5481 | 0.7110 | 0.711 |
| 0.5663 | 16.82 | 3600 | 0.5499 | 0.7101 | 0.71 |
| 0.5651 | 17.76 | 3800 | 0.5483 | 0.7111 | 0.711 |
| 0.5683 | 18.69 | 4000 | 0.5518 | 0.7021 | 0.702 |
| 0.5621 | 19.63 | 4200 | 0.5400 | 0.7168 | 0.718 |
| 0.5659 | 20.56 | 4400 | 0.5438 | 0.7139 | 0.714 |
| 0.5636 | 21.5 | 4600 | 0.5618 | 0.7047 | 0.706 |
| 0.5607 | 22.43 | 4800 | 0.5446 | 0.7109 | 0.711 |
| 0.563 | 23.36 | 5000 | 0.5546 | 0.7046 | 0.705 |
| 0.5603 | 24.3 | 5200 | 0.5635 | 0.7095 | 0.711 |
| 0.5587 | 25.23 | 5400 | 0.5438 | 0.7117 | 0.712 |
| 0.5634 | 26.17 | 5600 | 0.5475 | 0.7121 | 0.712 |
| 0.5562 | 27.1 | 5800 | 0.5424 | 0.7159 | 0.716 |
| 0.5581 | 28.04 | 6000 | 0.5470 | 0.7161 | 0.716 |
| 0.5576 | 28.97 | 6200 | 0.5540 | 0.7107 | 0.711 |
| 0.5576 | 29.91 | 6400 | 0.5485 | 0.7181 | 0.718 |
| 0.5567 | 30.84 | 6600 | 0.5466 | 0.7191 | 0.719 |
| 0.557 | 31.78 | 6800 | 0.5508 | 0.7119 | 0.712 |
| 0.5539 | 32.71 | 7000 | 0.5468 | 0.7171 | 0.717 |
| 0.5608 | 33.64 | 7200 | 0.5444 | 0.7100 | 0.71 |
| 0.5512 | 34.58 | 7400 | 0.5589 | 0.7116 | 0.713 |
| 0.5578 | 35.51 | 7600 | 0.5512 | 0.7187 | 0.719 |
| 0.5569 | 36.45 | 7800 | 0.5495 | 0.7130 | 0.713 |
| 0.5562 | 37.38 | 8000 | 0.5482 | 0.7140 | 0.714 |
| 0.5522 | 38.32 | 8200 | 0.5459 | 0.7161 | 0.716 |
| 0.5539 | 39.25 | 8400 | 0.5457 | 0.7161 | 0.716 |
| 0.5536 | 40.19 | 8600 | 0.5479 | 0.7151 | 0.715 |
| 0.5542 | 41.12 | 8800 | 0.5476 | 0.7151 | 0.715 |
| 0.5548 | 42.06 | 9000 | 0.5474 | 0.7131 | 0.713 |
| 0.5555 | 42.99 | 9200 | 0.5503 | 0.7158 | 0.716 |
| 0.5533 | 43.93 | 9400 | 0.5524 | 0.7155 | 0.716 |
| 0.5524 | 44.86 | 9600 | 0.5489 | 0.7189 | 0.719 |
| 0.5567 | 45.79 | 9800 | 0.5482 | 0.7190 | 0.719 |
| 0.551 | 46.73 | 10000 | 0.5487 | 0.7190 | 0.719 |
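
A quick sanity check on the table: the final row pairs step 10000 with epoch 46.73, which implies about 10000 / 46.73 ≈ 214 optimizer steps per epoch, i.e. roughly 214 × 128 ≈ 27.4k training examples at the listed batch size (an estimate, since the card does not state the dataset size).

```python
# Back-of-the-envelope check: infer the approximate training-set size
# from the final (step, epoch) pair and the batch size above.
steps, epochs, batch_size = 10_000, 46.73, 128
steps_per_epoch = steps / epochs                      # ≈ 214
approx_train_examples = steps_per_epoch * batch_size  # ≈ 27,391
print(round(steps_per_epoch), round(approx_train_examples))
```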
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_3-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:51:44+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_3-seqsight\_32768\_512\_43M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5590
* F1 Score: 0.7071
* Accuracy: 0.709
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
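
In the absence of documented code, the sketch below is a guess based only on the repository name, which suggests PEFT adapters trained from `meta-llama/Llama-2-7b-hf`; both the base checkpoint and the causal-LM head are assumptions.

```python
# Hedged quick-start: base model and head are inferred from the repo
# name ("LS_Llama-2-7b-hf_adapters_..."), not stated on this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
adapter = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_4_5e-05"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, adapter)
```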
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_4_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:51:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/fyc6glu | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:52:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers | This model is based on Meta-Llama-3-8B-Instruct and is governed by the Meta Llama 3 license agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
It was fine-tuned with the DPO method using the following datasets:
- https://huggingface.co/datasets/Intel/orca_dpo_pairs
- https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo
- https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2
- https://huggingface.co/datasets/M4-ai/prm_dpo_pairs_cleaned
- https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
We are happy for anyone to try it out and give feedback; if it proves popular, we will host the model on our LLM API at https://awanllm.com.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
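As a minimal sketch (not part of the original card), the same prompt can be assembled with the tokenizer's built-in chat template, assuming the repo's tokenizer ships the standard Llama 3 template; the example messages below are hypothetical:
```python
from transformers import AutoTokenizer

# Minimal sketch: build the Llama 3 Instruct prompt shown above with the
# tokenizer's chat template instead of concatenating special tokens by hand.
tokenizer = AutoTokenizer.from_pretrained(
    "AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # hypothetical
    {"role": "user", "content": "Explain DPO in one sentence."},    # hypothetical
]

# add_generation_prompt=True appends the assistant header so the model
# continues with its reply, matching the template above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```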
Quants:
FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1
GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1-GGUF | {"license": "llama3"} | AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:53:47+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This model is based on Meta-Llama-3-8B-Instruct and is governed by the Meta Llama 3 license agreement:
URL
It was fine-tuned with the DPO method using the following datasets:
- URL
- URL
- URL
- URL
- URL
We are happy for anyone to try it out and give feedback; if it proves popular, we will host the model on our LLM API at URL.
Instruct format:
Quants:
FP16: URL
GGUF: URL | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
43
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | # flammenai/flammen15-gutenberg-DPO-v1-7B AWQ
- Model creator: [flammenai](https://huggingface.co/flammenai)
- Original model: [flammen15-gutenberg-DPO-v1-7B](https://huggingface.co/flammenai/flammen15-gutenberg-DPO-v1-7B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/flammen15-gutenberg-DPO-v1-7B-AWQ"
system_message = "You are flammen15-gutenberg-DPO-v1-7B, incarnated as a powerful AI. You were created by flammenai."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
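As a minimal sketch of the vLLM route listed above (assuming vLLM 0.2.2 or later is installed; the prompt below is hypothetical):
```python
from vllm import LLM, SamplingParams

# Sketch: serve this AWQ checkpoint with vLLM; quantization="awq" selects
# the 4-bit AWQ weights at load time.
llm = LLM(model="solidrust/flammen15-gutenberg-DPO-v1-7B-AWQ", quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Write a one-paragraph story about a lighthouse."], params)
print(outputs[0].outputs[0].text)
```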
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/flammen15-gutenberg-DPO-v1-7B-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:54:53+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # flammenai/flammen15-gutenberg-DPO-v1-7B AWQ
- Model creator: flammenai
- Original model: flammen15-gutenberg-DPO-v1-7B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# flammenai/flammen15-gutenberg-DPO-v1-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen15-gutenberg-DPO-v1-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# flammenai/flammen15-gutenberg-DPO-v1-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen15-gutenberg-DPO-v1-7B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
41,
52,
5,
7,
6,
172
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# flammenai/flammen15-gutenberg-DPO-v1-7B AWQ\n\n- Model creator: flammenai\n- Original model: flammen15-gutenberg-DPO-v1-7B## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
text-generation | transformers | # Locutusque/Llama-3-Orca-2.0-8B AWQ
- Model creator: [Locutusque](https://huggingface.co/Locutusque)
- Original model: [Llama-3-Orca-2.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-2.0-8B)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Llama-3-Orca-2.0-8B-AWQ"
system_message = "You are Llama-3-Orca-2.0-8B, incarnated as a powerful AI. You were created by Locutusque."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
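As a minimal sketch of the plain-Transformers route listed above (assuming Transformers 4.35.0 or later with autoawq installed; the prompt below is hypothetical):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: since Transformers 4.35.0, AWQ checkpoints load directly through
# AutoModelForCausalLM; the AWQ quantization config is read from the repo.
model_id = "solidrust/Llama-3-Orca-2.0-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```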
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Llama-3-Orca-2.0-8B-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:55:54+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Locutusque/Llama-3-Orca-2.0-8B AWQ
- Model creator: Locutusque
- Original model: Llama-3-Orca-2.0-8B
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# Locutusque/Llama-3-Orca-2.0-8B AWQ\n\n- Model creator: Locutusque\n- Original model: Llama-3-Orca-2.0-8B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Locutusque/Llama-3-Orca-2.0-8B AWQ\n\n- Model creator: Locutusque\n- Original model: Llama-3-Orca-2.0-8B",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
41,
48,
5,
7,
6,
172
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Locutusque/Llama-3-Orca-2.0-8B AWQ\n\n- Model creator: Locutusque\n- Original model: Llama-3-Orca-2.0-8B## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5497
- F1 Score: 0.7229
- Accuracy: 0.724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6281 | 0.93 | 200 | 0.5688 | 0.6951 | 0.695 |
| 0.5949 | 1.87 | 400 | 0.5826 | 0.6703 | 0.672 |
| 0.585 | 2.8 | 600 | 0.5572 | 0.7048 | 0.705 |
| 0.5792 | 3.74 | 800 | 0.5626 | 0.6949 | 0.695 |
| 0.5762 | 4.67 | 1000 | 0.5562 | 0.7021 | 0.702 |
| 0.5695 | 5.61 | 1200 | 0.5461 | 0.7119 | 0.712 |
| 0.5649 | 6.54 | 1400 | 0.5500 | 0.7130 | 0.713 |
| 0.5631 | 7.48 | 1600 | 0.5447 | 0.7111 | 0.711 |
| 0.5608 | 8.41 | 1800 | 0.5496 | 0.7018 | 0.702 |
| 0.5639 | 9.35 | 2000 | 0.5401 | 0.7190 | 0.719 |
| 0.5537 | 10.28 | 2200 | 0.5468 | 0.7066 | 0.707 |
| 0.5519 | 11.21 | 2400 | 0.5395 | 0.7201 | 0.72 |
| 0.5524 | 12.15 | 2600 | 0.5341 | 0.7166 | 0.717 |
| 0.5481 | 13.08 | 2800 | 0.5306 | 0.7109 | 0.712 |
| 0.5482 | 14.02 | 3000 | 0.5349 | 0.7091 | 0.709 |
| 0.5444 | 14.95 | 3200 | 0.5333 | 0.7121 | 0.712 |
| 0.5442 | 15.89 | 3400 | 0.5393 | 0.7130 | 0.713 |
| 0.5419 | 16.82 | 3600 | 0.5386 | 0.7111 | 0.711 |
| 0.5389 | 17.76 | 3800 | 0.5367 | 0.7081 | 0.708 |
| 0.5403 | 18.69 | 4000 | 0.5463 | 0.7125 | 0.713 |
| 0.535 | 19.63 | 4200 | 0.5358 | 0.7188 | 0.719 |
| 0.536 | 20.56 | 4400 | 0.5356 | 0.7230 | 0.723 |
| 0.5325 | 21.5 | 4600 | 0.5593 | 0.6884 | 0.691 |
| 0.5311 | 22.43 | 4800 | 0.5377 | 0.7141 | 0.714 |
| 0.532 | 23.36 | 5000 | 0.5556 | 0.7030 | 0.704 |
| 0.5294 | 24.3 | 5200 | 0.5668 | 0.6834 | 0.688 |
| 0.5263 | 25.23 | 5400 | 0.5383 | 0.7070 | 0.707 |
| 0.53 | 26.17 | 5600 | 0.5423 | 0.7090 | 0.709 |
| 0.5225 | 27.1 | 5800 | 0.5405 | 0.7069 | 0.707 |
| 0.5252 | 28.04 | 6000 | 0.5461 | 0.7118 | 0.712 |
| 0.5229 | 28.97 | 6200 | 0.5614 | 0.6913 | 0.693 |
| 0.5242 | 29.91 | 6400 | 0.5449 | 0.708 | 0.708 |
| 0.5212 | 30.84 | 6600 | 0.5479 | 0.7129 | 0.713 |
| 0.5196 | 31.78 | 6800 | 0.5572 | 0.7041 | 0.705 |
| 0.5169 | 32.71 | 7000 | 0.5556 | 0.7032 | 0.704 |
| 0.5224 | 33.64 | 7200 | 0.5525 | 0.7023 | 0.703 |
| 0.5148 | 34.58 | 7400 | 0.5718 | 0.6824 | 0.686 |
| 0.5208 | 35.51 | 7600 | 0.5579 | 0.6976 | 0.699 |
| 0.5163 | 36.45 | 7800 | 0.5610 | 0.7075 | 0.708 |
| 0.5177 | 37.38 | 8000 | 0.5560 | 0.7061 | 0.707 |
| 0.5112 | 38.32 | 8200 | 0.5569 | 0.7116 | 0.712 |
| 0.5159 | 39.25 | 8400 | 0.5547 | 0.7156 | 0.716 |
| 0.5124 | 40.19 | 8600 | 0.5570 | 0.7094 | 0.71 |
| 0.5146 | 41.12 | 8800 | 0.5509 | 0.7116 | 0.712 |
| 0.5156 | 42.06 | 9000 | 0.5519 | 0.7086 | 0.709 |
| 0.5127 | 42.99 | 9200 | 0.5603 | 0.6957 | 0.697 |
| 0.5122 | 43.93 | 9400 | 0.5620 | 0.6904 | 0.692 |
| 0.5124 | 44.86 | 9600 | 0.5578 | 0.7041 | 0.705 |
| 0.5137 | 45.79 | 9800 | 0.5581 | 0.7001 | 0.701 |
| 0.5091 | 46.73 | 10000 | 0.5590 | 0.7021 | 0.703 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_3-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:55:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_3-seqsight\_32768\_512\_43M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5497
* F1 Score: 0.7229
* Accuracy: 0.724
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/e8renp4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:56:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_32_0.05_4_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T14:59:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
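No code was provided here; the following is a minimal sketch that assumes the checkpoint loads with the standard causal-LM classes. The repository id comes from this card's metadata; everything else is illustrative.

```python
# Minimal usage sketch -- assumes a standard transformers causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ar08/llama3-715m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```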
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ar08/llama3-715m | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:59:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
44,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rloo_zephyr_vllm
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
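For reference, the values above map onto `transformers.TrainingArguments` roughly as follows. This is an illustrative reconstruction: `output_dir` is a placeholder, and the actual training script may configure things differently.

```python
# Sketch of the hyperparameters above as TrainingArguments -- illustrative only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="rloo_zephyr_vllm",    # placeholder path
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=32,   # 8 * 32 = 256 effective train batch size
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```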
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b-deduped", "model-index": [{"name": "rloo_zephyr_vllm", "results": []}]} | vwxyzjn/rloo_zephyr_vllm | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:EleutherAI/pythia-1b-deduped",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T14:59:27+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #conversational #base_model-EleutherAI/pythia-1b-deduped #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# rloo_zephyr_vllm
This model is a fine-tuned version of EleutherAI/pythia-1b-deduped on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# rloo_zephyr_vllm\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b-deduped on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #conversational #base_model-EleutherAI/pythia-1b-deduped #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# rloo_zephyr_vllm\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b-deduped on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
73,
41,
7,
9,
9,
4,
113,
44
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #conversational #base_model-EleutherAI/pythia-1b-deduped #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# rloo_zephyr_vllm\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b-deduped on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390
- F1 Score: 0.7352
- Accuracy: 0.737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
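The card records the optimizer and scheduler settings but not the adapter configuration itself. The sketch below shows how a PEFT run of this kind is typically wired up; the `LoraConfig` values, `num_labels`, and the use of the standard Auto classes for the base model are all placeholders or assumptions, not sourced from this card.

```python
# Illustrative PEFT setup -- adapter hyperparameters are placeholders.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_32768_512_43M",  # base model named by this card
    num_labels=2,                                # assumed binary classification task
)
lora = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```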
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6219 | 0.93 | 200 | 0.5615 | 0.7143 | 0.715 |
| 0.5901 | 1.87 | 400 | 0.5724 | 0.6862 | 0.687 |
| 0.5791 | 2.8 | 600 | 0.5548 | 0.7050 | 0.705 |
| 0.5704 | 3.74 | 800 | 0.5660 | 0.7032 | 0.704 |
| 0.5659 | 4.67 | 1000 | 0.5491 | 0.7071 | 0.707 |
| 0.5563 | 5.61 | 1200 | 0.5409 | 0.7079 | 0.708 |
| 0.5516 | 6.54 | 1400 | 0.5450 | 0.7081 | 0.708 |
| 0.5471 | 7.48 | 1600 | 0.5324 | 0.722 | 0.722 |
| 0.5434 | 8.41 | 1800 | 0.5451 | 0.7050 | 0.705 |
| 0.5442 | 9.35 | 2000 | 0.5373 | 0.7082 | 0.709 |
| 0.5314 | 10.28 | 2200 | 0.5364 | 0.7180 | 0.718 |
| 0.5294 | 11.21 | 2400 | 0.5513 | 0.7211 | 0.721 |
| 0.5272 | 12.15 | 2600 | 0.5450 | 0.7078 | 0.709 |
| 0.5199 | 13.08 | 2800 | 0.5316 | 0.7111 | 0.714 |
| 0.5178 | 14.02 | 3000 | 0.5374 | 0.7060 | 0.706 |
| 0.5136 | 14.95 | 3200 | 0.5289 | 0.7191 | 0.719 |
| 0.5084 | 15.89 | 3400 | 0.5419 | 0.7151 | 0.715 |
| 0.5067 | 16.82 | 3600 | 0.5432 | 0.7286 | 0.729 |
| 0.5013 | 17.76 | 3800 | 0.5421 | 0.7167 | 0.717 |
| 0.4986 | 18.69 | 4000 | 0.5601 | 0.7081 | 0.709 |
| 0.4906 | 19.63 | 4200 | 0.5510 | 0.7041 | 0.704 |
| 0.4867 | 20.56 | 4400 | 0.5497 | 0.7131 | 0.713 |
| 0.4837 | 21.5 | 4600 | 0.6035 | 0.6896 | 0.692 |
| 0.4767 | 22.43 | 4800 | 0.5738 | 0.7091 | 0.709 |
| 0.4769 | 23.36 | 5000 | 0.5860 | 0.7065 | 0.707 |
| 0.4707 | 24.3 | 5200 | 0.5907 | 0.6815 | 0.685 |
| 0.4651 | 25.23 | 5400 | 0.5700 | 0.7000 | 0.7 |
| 0.4667 | 26.17 | 5600 | 0.5695 | 0.7011 | 0.701 |
| 0.4565 | 27.1 | 5800 | 0.5968 | 0.7100 | 0.71 |
| 0.4563 | 28.04 | 6000 | 0.5916 | 0.7038 | 0.704 |
| 0.4521 | 28.97 | 6200 | 0.5932 | 0.6945 | 0.695 |
| 0.4511 | 29.91 | 6400 | 0.5748 | 0.7040 | 0.704 |
| 0.446 | 30.84 | 6600 | 0.5834 | 0.7200 | 0.72 |
| 0.4417 | 31.78 | 6800 | 0.6001 | 0.7077 | 0.708 |
| 0.4397 | 32.71 | 7000 | 0.5991 | 0.7015 | 0.702 |
| 0.4423 | 33.64 | 7200 | 0.6089 | 0.6984 | 0.699 |
| 0.4296 | 34.58 | 7400 | 0.6253 | 0.6890 | 0.691 |
| 0.4363 | 35.51 | 7600 | 0.6237 | 0.6934 | 0.695 |
| 0.4296 | 36.45 | 7800 | 0.6185 | 0.6988 | 0.699 |
| 0.4321 | 37.38 | 8000 | 0.6195 | 0.6982 | 0.699 |
| 0.423 | 38.32 | 8200 | 0.6266 | 0.7006 | 0.701 |
| 0.4243 | 39.25 | 8400 | 0.6307 | 0.6997 | 0.7 |
| 0.4201 | 40.19 | 8600 | 0.6291 | 0.6941 | 0.695 |
| 0.4204 | 41.12 | 8800 | 0.6387 | 0.6984 | 0.699 |
| 0.4233 | 42.06 | 9000 | 0.6235 | 0.6913 | 0.692 |
| 0.4202 | 42.99 | 9200 | 0.6303 | 0.6957 | 0.697 |
| 0.4168 | 43.93 | 9400 | 0.6351 | 0.6938 | 0.695 |
| 0.4173 | 44.86 | 9600 | 0.6347 | 0.6950 | 0.696 |
| 0.4161 | 45.79 | 9800 | 0.6304 | 0.6944 | 0.695 |
| 0.41 | 46.73 | 10000 | 0.6344 | 0.6922 | 0.693 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_3-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:59:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_3-seqsight\_32768\_512\_43M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5390
* F1 Score: 0.7352
* Accuracy: 0.737
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4635
- F1 Score: 0.7860
- Accuracy: 0.786
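The card does not state how the F1 score and accuracy were computed; a generic reconstruction with scikit-learn is sketched below (the labels, predictions, and F1 averaging mode are placeholders).

```python
# Generic metric-computation sketch -- not the card's actual evaluation code.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]  # placeholder labels
y_pred = [0, 1, 0, 0, 1]  # placeholder predictions
print("F1 Score:", f1_score(y_true, y_pred, average="macro"))  # averaging mode assumed
print("Accuracy:", accuracy_score(y_true, y_pred))
```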
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.594 | 1.34 | 200 | 0.5423 | 0.7349 | 0.736 |
| 0.5384 | 2.68 | 400 | 0.5357 | 0.7269 | 0.727 |
| 0.5284 | 4.03 | 600 | 0.5253 | 0.7360 | 0.736 |
| 0.5233 | 5.37 | 800 | 0.5274 | 0.7356 | 0.736 |
| 0.5197 | 6.71 | 1000 | 0.5153 | 0.7520 | 0.752 |
| 0.517 | 8.05 | 1200 | 0.5213 | 0.7474 | 0.748 |
| 0.5116 | 9.4 | 1400 | 0.5085 | 0.7528 | 0.753 |
| 0.5094 | 10.74 | 1600 | 0.5075 | 0.7489 | 0.749 |
| 0.5088 | 12.08 | 1800 | 0.5199 | 0.7482 | 0.749 |
| 0.5071 | 13.42 | 2000 | 0.5079 | 0.7510 | 0.751 |
| 0.5052 | 14.77 | 2200 | 0.5027 | 0.7479 | 0.748 |
| 0.4987 | 16.11 | 2400 | 0.5077 | 0.7490 | 0.749 |
| 0.5038 | 17.45 | 2600 | 0.5009 | 0.7539 | 0.754 |
| 0.4987 | 18.79 | 2800 | 0.5037 | 0.7490 | 0.749 |
| 0.495 | 20.13 | 3000 | 0.5025 | 0.7500 | 0.75 |
| 0.4972 | 21.48 | 3200 | 0.5127 | 0.7596 | 0.76 |
| 0.4962 | 22.82 | 3400 | 0.5022 | 0.75 | 0.75 |
| 0.492 | 24.16 | 3600 | 0.4972 | 0.7539 | 0.754 |
| 0.4885 | 25.5 | 3800 | 0.4980 | 0.7498 | 0.75 |
| 0.494 | 26.85 | 4000 | 0.4983 | 0.7499 | 0.75 |
| 0.4896 | 28.19 | 4200 | 0.4968 | 0.7518 | 0.752 |
| 0.4879 | 29.53 | 4400 | 0.5084 | 0.7566 | 0.757 |
| 0.4862 | 30.87 | 4600 | 0.5038 | 0.7600 | 0.76 |
| 0.4885 | 32.21 | 4800 | 0.4983 | 0.7549 | 0.755 |
| 0.4875 | 33.56 | 5000 | 0.5069 | 0.7585 | 0.759 |
| 0.4891 | 34.9 | 5200 | 0.4988 | 0.7530 | 0.753 |
| 0.482 | 36.24 | 5400 | 0.4966 | 0.7570 | 0.757 |
| 0.4855 | 37.58 | 5600 | 0.4969 | 0.7540 | 0.754 |
| 0.482 | 38.93 | 5800 | 0.4970 | 0.7489 | 0.749 |
| 0.4815 | 40.27 | 6000 | 0.4939 | 0.7489 | 0.749 |
| 0.4817 | 41.61 | 6200 | 0.4957 | 0.7450 | 0.745 |
| 0.4833 | 42.95 | 6400 | 0.4976 | 0.7530 | 0.753 |
| 0.4792 | 44.3 | 6600 | 0.4988 | 0.7540 | 0.754 |
| 0.4832 | 45.64 | 6800 | 0.4975 | 0.7550 | 0.755 |
| 0.4797 | 46.98 | 7000 | 0.4971 | 0.7510 | 0.751 |
| 0.4802 | 48.32 | 7200 | 0.4996 | 0.752 | 0.752 |
| 0.4787 | 49.66 | 7400 | 0.4962 | 0.7530 | 0.753 |
| 0.4787 | 51.01 | 7600 | 0.4968 | 0.7540 | 0.754 |
| 0.4759 | 52.35 | 7800 | 0.4963 | 0.7500 | 0.75 |
| 0.4789 | 53.69 | 8000 | 0.4952 | 0.7469 | 0.747 |
| 0.4764 | 55.03 | 8200 | 0.4966 | 0.7530 | 0.753 |
| 0.4788 | 56.38 | 8400 | 0.4981 | 0.7580 | 0.758 |
| 0.4774 | 57.72 | 8600 | 0.4964 | 0.7520 | 0.752 |
| 0.4734 | 59.06 | 8800 | 0.4972 | 0.7530 | 0.753 |
| 0.4753 | 60.4 | 9000 | 0.4982 | 0.7560 | 0.756 |
| 0.4777 | 61.74 | 9200 | 0.4955 | 0.756 | 0.756 |
| 0.4783 | 63.09 | 9400 | 0.4959 | 0.7570 | 0.757 |
| 0.4743 | 64.43 | 9600 | 0.4957 | 0.7540 | 0.754 |
| 0.4795 | 65.77 | 9800 | 0.4971 | 0.7540 | 0.754 |
| 0.4749 | 67.11 | 10000 | 0.4960 | 0.754 | 0.754 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_2-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T14:59:38+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_2-seqsight\_32768\_512\_43M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4635
* F1 Score: 0.7860
* Accuracy: 0.786
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4626
- F1 Score: 0.7890
- Accuracy: 0.789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5751 | 1.34 | 200 | 0.5271 | 0.7484 | 0.749 |
| 0.5281 | 2.68 | 400 | 0.5243 | 0.7467 | 0.747 |
| 0.5171 | 4.03 | 600 | 0.5162 | 0.7439 | 0.744 |
| 0.5097 | 5.37 | 800 | 0.5136 | 0.7549 | 0.755 |
| 0.5055 | 6.71 | 1000 | 0.5194 | 0.7512 | 0.752 |
| 0.4997 | 8.05 | 1200 | 0.5015 | 0.7509 | 0.751 |
| 0.4922 | 9.4 | 1400 | 0.5024 | 0.7560 | 0.756 |
| 0.4886 | 10.74 | 1600 | 0.5015 | 0.7520 | 0.752 |
| 0.4882 | 12.08 | 1800 | 0.5144 | 0.7531 | 0.754 |
| 0.4815 | 13.42 | 2000 | 0.5042 | 0.7599 | 0.76 |
| 0.4818 | 14.77 | 2200 | 0.5019 | 0.7563 | 0.757 |
| 0.473 | 16.11 | 2400 | 0.5059 | 0.7570 | 0.757 |
| 0.4768 | 17.45 | 2600 | 0.4957 | 0.7639 | 0.764 |
| 0.4711 | 18.79 | 2800 | 0.5030 | 0.7637 | 0.764 |
| 0.4636 | 20.13 | 3000 | 0.5009 | 0.7679 | 0.768 |
| 0.4655 | 21.48 | 3200 | 0.5263 | 0.7501 | 0.752 |
| 0.4644 | 22.82 | 3400 | 0.5047 | 0.7608 | 0.761 |
| 0.4559 | 24.16 | 3600 | 0.4992 | 0.7618 | 0.762 |
| 0.4534 | 25.5 | 3800 | 0.5043 | 0.7608 | 0.761 |
| 0.4565 | 26.85 | 4000 | 0.4970 | 0.7640 | 0.764 |
| 0.4508 | 28.19 | 4200 | 0.5071 | 0.7624 | 0.763 |
| 0.4493 | 29.53 | 4400 | 0.5147 | 0.7642 | 0.765 |
| 0.4444 | 30.87 | 4600 | 0.5106 | 0.7583 | 0.759 |
| 0.4453 | 32.21 | 4800 | 0.5107 | 0.7586 | 0.759 |
| 0.4446 | 33.56 | 5000 | 0.5167 | 0.7614 | 0.762 |
| 0.4455 | 34.9 | 5200 | 0.5095 | 0.7535 | 0.754 |
| 0.4373 | 36.24 | 5400 | 0.5012 | 0.7590 | 0.759 |
| 0.4395 | 37.58 | 5600 | 0.5026 | 0.7478 | 0.748 |
| 0.4324 | 38.93 | 5800 | 0.5023 | 0.7590 | 0.759 |
| 0.4336 | 40.27 | 6000 | 0.4963 | 0.7510 | 0.751 |
| 0.4318 | 41.61 | 6200 | 0.5013 | 0.7559 | 0.756 |
| 0.4301 | 42.95 | 6400 | 0.5128 | 0.7493 | 0.75 |
| 0.4272 | 44.3 | 6600 | 0.5120 | 0.7537 | 0.754 |
| 0.4316 | 45.64 | 6800 | 0.5206 | 0.7540 | 0.755 |
| 0.4264 | 46.98 | 7000 | 0.5138 | 0.7538 | 0.754 |
| 0.4242 | 48.32 | 7200 | 0.5163 | 0.7551 | 0.756 |
| 0.423 | 49.66 | 7400 | 0.5117 | 0.7506 | 0.751 |
| 0.4239 | 51.01 | 7600 | 0.5220 | 0.7425 | 0.744 |
| 0.4193 | 52.35 | 7800 | 0.5163 | 0.7517 | 0.752 |
| 0.4226 | 53.69 | 8000 | 0.5121 | 0.7548 | 0.755 |
| 0.419 | 55.03 | 8200 | 0.5148 | 0.7504 | 0.751 |
| 0.4201 | 56.38 | 8400 | 0.5143 | 0.7504 | 0.751 |
| 0.4197 | 57.72 | 8600 | 0.5131 | 0.7535 | 0.754 |
| 0.4163 | 59.06 | 8800 | 0.5112 | 0.7495 | 0.75 |
| 0.4132 | 60.4 | 9000 | 0.5188 | 0.7485 | 0.749 |
| 0.4182 | 61.74 | 9200 | 0.5114 | 0.7516 | 0.752 |
| 0.4165 | 63.09 | 9400 | 0.5168 | 0.7493 | 0.75 |
| 0.4103 | 64.43 | 9600 | 0.5129 | 0.7567 | 0.757 |
| 0.4171 | 65.77 | 9800 | 0.5183 | 0.7483 | 0.749 |
| 0.4116 | 67.11 | 10000 | 0.5155 | 0.7525 | 0.753 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_2-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T15:00:14+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_2-seqsight\_32768\_512\_43M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4626
* F1 Score: 0.7890
* Accuracy: 0.789
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4622
- F1 Score: 0.7959
- Accuracy: 0.796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5617 | 1.34 | 200 | 0.5167 | 0.7500 | 0.751 |
| 0.5206 | 2.68 | 400 | 0.5242 | 0.7367 | 0.738 |
| 0.5069 | 4.03 | 600 | 0.5101 | 0.7456 | 0.746 |
| 0.4965 | 5.37 | 800 | 0.5066 | 0.7537 | 0.754 |
| 0.4893 | 6.71 | 1000 | 0.5088 | 0.7533 | 0.754 |
| 0.4816 | 8.05 | 1200 | 0.4897 | 0.7556 | 0.756 |
| 0.4721 | 9.4 | 1400 | 0.5046 | 0.7609 | 0.761 |
| 0.4629 | 10.74 | 1600 | 0.4977 | 0.7720 | 0.772 |
| 0.4619 | 12.08 | 1800 | 0.4909 | 0.7620 | 0.762 |
| 0.4515 | 13.42 | 2000 | 0.5238 | 0.7467 | 0.748 |
| 0.447 | 14.77 | 2200 | 0.5081 | 0.7597 | 0.76 |
| 0.4363 | 16.11 | 2400 | 0.5179 | 0.7600 | 0.76 |
| 0.4342 | 17.45 | 2600 | 0.5182 | 0.7510 | 0.751 |
| 0.4217 | 18.79 | 2800 | 0.5406 | 0.7378 | 0.74 |
| 0.4136 | 20.13 | 3000 | 0.5344 | 0.7592 | 0.76 |
| 0.4089 | 21.48 | 3200 | 0.5592 | 0.7513 | 0.754 |
| 0.4026 | 22.82 | 3400 | 0.5251 | 0.7455 | 0.746 |
| 0.3905 | 24.16 | 3600 | 0.5552 | 0.7475 | 0.748 |
| 0.3842 | 25.5 | 3800 | 0.5535 | 0.7528 | 0.754 |
| 0.379 | 26.85 | 4000 | 0.5383 | 0.7499 | 0.75 |
| 0.3731 | 28.19 | 4200 | 0.5806 | 0.7401 | 0.742 |
| 0.3637 | 29.53 | 4400 | 0.5965 | 0.7487 | 0.75 |
| 0.3579 | 30.87 | 4600 | 0.5704 | 0.7394 | 0.74 |
| 0.3512 | 32.21 | 4800 | 0.6344 | 0.7407 | 0.743 |
| 0.3492 | 33.56 | 5000 | 0.6245 | 0.7389 | 0.74 |
| 0.341 | 34.9 | 5200 | 0.6164 | 0.7378 | 0.739 |
| 0.3312 | 36.24 | 5400 | 0.5966 | 0.7425 | 0.743 |
| 0.3296 | 37.58 | 5600 | 0.6205 | 0.7388 | 0.739 |
| 0.3182 | 38.93 | 5800 | 0.6105 | 0.7376 | 0.738 |
| 0.3165 | 40.27 | 6000 | 0.6069 | 0.736 | 0.736 |
| 0.3096 | 41.61 | 6200 | 0.6144 | 0.7455 | 0.746 |
| 0.309 | 42.95 | 6400 | 0.6497 | 0.7355 | 0.736 |
| 0.301 | 44.3 | 6600 | 0.6857 | 0.7393 | 0.74 |
| 0.3 | 45.64 | 6800 | 0.6950 | 0.7285 | 0.73 |
| 0.3012 | 46.98 | 7000 | 0.6517 | 0.7427 | 0.743 |
| 0.2952 | 48.32 | 7200 | 0.6615 | 0.7406 | 0.741 |
| 0.2885 | 49.66 | 7400 | 0.6701 | 0.7332 | 0.734 |
| 0.2802 | 51.01 | 7600 | 0.6974 | 0.7262 | 0.727 |
| 0.2763 | 52.35 | 7800 | 0.6962 | 0.7376 | 0.738 |
| 0.279 | 53.69 | 8000 | 0.6812 | 0.7374 | 0.738 |
| 0.2739 | 55.03 | 8200 | 0.6958 | 0.7351 | 0.736 |
| 0.2743 | 56.38 | 8400 | 0.7221 | 0.7278 | 0.729 |
| 0.2674 | 57.72 | 8600 | 0.7086 | 0.7356 | 0.736 |
| 0.2677 | 59.06 | 8800 | 0.7129 | 0.7343 | 0.735 |
| 0.2646 | 60.4 | 9000 | 0.7246 | 0.7305 | 0.731 |
| 0.2654 | 61.74 | 9200 | 0.7072 | 0.7262 | 0.727 |
| 0.2601 | 63.09 | 9400 | 0.7179 | 0.7312 | 0.732 |
| 0.258 | 64.43 | 9600 | 0.7255 | 0.7323 | 0.733 |
| 0.2621 | 65.77 | 9800 | 0.7326 | 0.7280 | 0.729 |
| 0.2603 | 67.11 | 10000 | 0.7294 | 0.7292 | 0.73 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_tf_2-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T15:00:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_tf\_2-seqsight\_32768\_512\_43M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4622
* F1 Score: 0.7959
* Accuracy: 0.796
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
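Until this section is completed, here is a hedged placeholder sketch (assumptions: a standard XLM-R sequence-classification head used cross-encoder style, as the repo name suggests; the label meanings are not documented):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("quangtqv/tool_learning_cross_encoder_v3")
model = AutoModelForSequenceClassification.from_pretrained("quangtqv/tool_learning_cross_encoder_v3")

# Cross-encoder style scoring: the (query, candidate) pair is encoded jointly.
inputs = tokenizer("what tool parses JSON?", "json.loads in the standard library", return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.softmax(-1)
print(scores)
```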
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quangtqv/tool_learning_cross_encoder_v3 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T15:00:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
40,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6354
- F1 Score: 0.3716
- Accuracy: 0.3803
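For reference, a hedged sketch of loading this PEFT adapter for inference; the card does not publish an official snippet, so the `AutoModel` head and the `trust_remote_code` flag below are assumptions:

```python
from transformers import AutoModel
from peft import PeftModel

# Assumption: the seqsight base model loads via AutoModel (possibly with
# custom code); swap in a task-specific head class for classification use.
base = AutoModel.from_pretrained("mahdibaghbanzadeh/seqsight_32768_512_43M", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_virus_covid-seqsight_32768_512_43M-L1_f")
model.eval()
```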
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
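To make these settings concrete, here is a minimal hedged reconstruction as 🤗 `TrainingArguments`; the authors' actual training script is not included in this card, so the output directory and anything not listed above are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_virus_covid-seqsight_32768_512_43M-L1_f",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```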
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1856 | 0.35 | 200 | 2.1862 | 0.0584 | 0.1231 |
| 2.1827 | 0.7 | 400 | 2.1814 | 0.0880 | 0.1368 |
| 2.1749 | 1.05 | 600 | 2.1695 | 0.1360 | 0.1556 |
| 2.1678 | 1.4 | 800 | 2.1620 | 0.1018 | 0.1578 |
| 2.1597 | 1.75 | 1000 | 2.1532 | 0.1245 | 0.1743 |
| 2.1497 | 2.09 | 1200 | 2.1463 | 0.1223 | 0.1768 |
| 2.1336 | 2.44 | 1400 | 2.1075 | 0.1712 | 0.1947 |
| 2.1006 | 2.79 | 1600 | 2.0510 | 0.1868 | 0.2228 |
| 2.0648 | 3.14 | 1800 | 2.0248 | 0.1955 | 0.2269 |
| 2.0301 | 3.49 | 2000 | 1.9859 | 0.2348 | 0.2523 |
| 2.0098 | 3.84 | 2200 | 1.9554 | 0.2532 | 0.2677 |
| 1.9882 | 4.19 | 2400 | 1.9165 | 0.2718 | 0.2842 |
| 1.9604 | 4.54 | 2600 | 1.8834 | 0.2781 | 0.2914 |
| 1.942 | 4.89 | 2800 | 1.8575 | 0.2887 | 0.3025 |
| 1.9183 | 5.24 | 3000 | 1.8389 | 0.2878 | 0.3082 |
| 1.9027 | 5.58 | 3200 | 1.8136 | 0.3004 | 0.3196 |
| 1.8868 | 5.93 | 3400 | 1.8177 | 0.2947 | 0.3131 |
| 1.8806 | 6.28 | 3600 | 1.7884 | 0.3190 | 0.3344 |
| 1.8632 | 6.63 | 3800 | 1.7800 | 0.3097 | 0.3299 |
| 1.8551 | 6.98 | 4000 | 1.7610 | 0.3201 | 0.3394 |
| 1.846 | 7.33 | 4200 | 1.7507 | 0.3170 | 0.3379 |
| 1.8396 | 7.68 | 4400 | 1.7363 | 0.3258 | 0.3391 |
| 1.8348 | 8.03 | 4600 | 1.7578 | 0.3086 | 0.3272 |
| 1.8203 | 8.38 | 4800 | 1.7280 | 0.3298 | 0.3478 |
| 1.8233 | 8.73 | 5000 | 1.7161 | 0.3312 | 0.3516 |
| 1.8117 | 9.08 | 5200 | 1.7093 | 0.3323 | 0.3429 |
| 1.8013 | 9.42 | 5400 | 1.6968 | 0.3377 | 0.3573 |
| 1.7983 | 9.77 | 5600 | 1.6937 | 0.3354 | 0.3536 |
| 1.796 | 10.12 | 5800 | 1.6863 | 0.3478 | 0.3589 |
| 1.7942 | 10.47 | 6000 | 1.6781 | 0.3521 | 0.3644 |
| 1.7879 | 10.82 | 6200 | 1.6785 | 0.3433 | 0.3540 |
| 1.7773 | 11.17 | 6400 | 1.6732 | 0.3494 | 0.3601 |
| 1.7728 | 11.52 | 6600 | 1.6701 | 0.3511 | 0.3631 |
| 1.776 | 11.87 | 6800 | 1.6609 | 0.3559 | 0.3686 |
| 1.7674 | 12.22 | 7000 | 1.6534 | 0.3625 | 0.3745 |
| 1.7588 | 12.57 | 7200 | 1.6500 | 0.3622 | 0.3756 |
| 1.7692 | 12.91 | 7400 | 1.6559 | 0.3579 | 0.3688 |
| 1.7543 | 13.26 | 7600 | 1.6547 | 0.3556 | 0.3673 |
| 1.7568 | 13.61 | 7800 | 1.6483 | 0.3649 | 0.3755 |
| 1.7573 | 13.96 | 8000 | 1.6423 | 0.3640 | 0.3756 |
| 1.7442 | 14.31 | 8200 | 1.6456 | 0.3620 | 0.3746 |
| 1.7566 | 14.66 | 8400 | 1.6388 | 0.3726 | 0.3818 |
| 1.7466 | 15.01 | 8600 | 1.6458 | 0.3579 | 0.3698 |
| 1.7529 | 15.36 | 8800 | 1.6328 | 0.3769 | 0.3863 |
| 1.7406 | 15.71 | 9000 | 1.6344 | 0.3714 | 0.3845 |
| 1.7376 | 16.06 | 9200 | 1.6312 | 0.3745 | 0.3864 |
| 1.7423 | 16.4 | 9400 | 1.6308 | 0.3732 | 0.3861 |
| 1.7429 | 16.75 | 9600 | 1.6332 | 0.3713 | 0.3826 |
| 1.7435 | 17.1 | 9800 | 1.6332 | 0.3709 | 0.3823 |
| 1.7473 | 17.45 | 10000 | 1.6315 | 0.3732 | 0.3842 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_virus_covid-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T15:00:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_virus\_covid-seqsight\_32768\_512\_43M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6354
* F1 Score: 0.3716
* Accuracy: 0.3803
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
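Pending the author's snippet, a hedged feature-extraction sketch (assumption: this is a tiny randomly initialized Llama checkpoint intended for testing, so the extracted features carry no semantic meaning):

```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("michaelbenayoun/llama-2-tiny-4kv-heads-8layers-random")
model = AutoModel.from_pretrained("michaelbenayoun/llama-2-tiny-4kv-heads-8layers-random")

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.shape)
```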
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | michaelbenayoun/llama-2-tiny-4kv-heads-8layers-random | null | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T15:00:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
39,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2253
- F1 Score: 0.5417
- Accuracy: 0.5433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1856 | 0.35 | 200 | 2.1853 | 0.0568 | 0.1219 |
| 2.1788 | 0.7 | 400 | 2.1724 | 0.1014 | 0.1494 |
| 2.1619 | 1.05 | 600 | 2.1430 | 0.1503 | 0.1768 |
| 2.1148 | 1.4 | 800 | 2.0385 | 0.1734 | 0.2172 |
| 2.0028 | 1.75 | 1000 | 1.9046 | 0.2631 | 0.2787 |
| 1.9152 | 2.09 | 1200 | 1.8482 | 0.2655 | 0.2893 |
| 1.852 | 2.44 | 1400 | 1.7498 | 0.3254 | 0.3333 |
| 1.8092 | 2.79 | 1600 | 1.6999 | 0.3373 | 0.3568 |
| 1.7683 | 3.14 | 1800 | 1.6663 | 0.3424 | 0.3645 |
| 1.7302 | 3.49 | 2000 | 1.6432 | 0.3620 | 0.3758 |
| 1.7128 | 3.84 | 2200 | 1.6109 | 0.3820 | 0.3966 |
| 1.6798 | 4.19 | 2400 | 1.5869 | 0.3928 | 0.3989 |
| 1.659 | 4.54 | 2600 | 1.5614 | 0.3969 | 0.4073 |
| 1.6491 | 4.89 | 2800 | 1.5399 | 0.4111 | 0.4249 |
| 1.6308 | 5.24 | 3000 | 1.5227 | 0.4190 | 0.4286 |
| 1.6162 | 5.58 | 3200 | 1.5082 | 0.4266 | 0.4413 |
| 1.5969 | 5.93 | 3400 | 1.5042 | 0.4202 | 0.4340 |
| 1.5869 | 6.28 | 3600 | 1.4714 | 0.4564 | 0.4627 |
| 1.561 | 6.63 | 3800 | 1.4475 | 0.4479 | 0.4614 |
| 1.5523 | 6.98 | 4000 | 1.4304 | 0.4622 | 0.4720 |
| 1.5363 | 7.33 | 4200 | 1.4157 | 0.4687 | 0.4787 |
| 1.5188 | 7.68 | 4400 | 1.4040 | 0.4700 | 0.4752 |
| 1.5109 | 8.03 | 4600 | 1.3890 | 0.4809 | 0.4873 |
| 1.488 | 8.38 | 4800 | 1.3785 | 0.4747 | 0.4868 |
| 1.4927 | 8.73 | 5000 | 1.3663 | 0.4806 | 0.4899 |
| 1.4798 | 9.08 | 5200 | 1.3459 | 0.4998 | 0.5048 |
| 1.4619 | 9.42 | 5400 | 1.3396 | 0.4902 | 0.5014 |
| 1.4528 | 9.77 | 5600 | 1.3264 | 0.5010 | 0.5110 |
| 1.4431 | 10.12 | 5800 | 1.3171 | 0.4994 | 0.5096 |
| 1.4406 | 10.47 | 6000 | 1.3090 | 0.5127 | 0.5161 |
| 1.4252 | 10.82 | 6200 | 1.3047 | 0.5159 | 0.5148 |
| 1.4087 | 11.17 | 6400 | 1.2971 | 0.5152 | 0.5154 |
| 1.4124 | 11.52 | 6600 | 1.2890 | 0.5257 | 0.5229 |
| 1.4147 | 11.87 | 6800 | 1.2846 | 0.5164 | 0.5208 |
| 1.3961 | 12.22 | 7000 | 1.2720 | 0.5226 | 0.5232 |
| 1.3883 | 12.57 | 7200 | 1.2659 | 0.5313 | 0.5323 |
| 1.395 | 12.91 | 7400 | 1.2664 | 0.5366 | 0.5353 |
| 1.3821 | 13.26 | 7600 | 1.2610 | 0.5355 | 0.5334 |
| 1.3721 | 13.61 | 7800 | 1.2549 | 0.5389 | 0.5348 |
| 1.3735 | 13.96 | 8000 | 1.2449 | 0.5382 | 0.5410 |
| 1.3594 | 14.31 | 8200 | 1.2458 | 0.5412 | 0.5410 |
| 1.3733 | 14.66 | 8400 | 1.2368 | 0.5433 | 0.5422 |
| 1.3614 | 15.01 | 8600 | 1.2381 | 0.5445 | 0.5465 |
| 1.3687 | 15.36 | 8800 | 1.2334 | 0.5388 | 0.5410 |
| 1.3481 | 15.71 | 9000 | 1.2289 | 0.5448 | 0.5473 |
| 1.3585 | 16.06 | 9200 | 1.2274 | 0.5503 | 0.5491 |
| 1.362 | 16.4 | 9400 | 1.2240 | 0.5443 | 0.5483 |
| 1.3525 | 16.75 | 9600 | 1.2242 | 0.5502 | 0.5495 |
| 1.35 | 17.1 | 9800 | 1.2247 | 0.5491 | 0.5494 |
| 1.3605 | 17.45 | 10000 | 1.2232 | 0.5482 | 0.5488 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_virus_covid-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T15:01:18+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_virus\_covid-seqsight\_32768\_512\_43M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2253
* F1 Score: 0.5417
* Accuracy: 0.5433
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0029
- F1 Score: 0.6167
- Accuracy: 0.6178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1853 | 0.35 | 200 | 2.1805 | 0.0843 | 0.1372 |
| 2.1768 | 0.7 | 400 | 2.1641 | 0.1214 | 0.1525 |
| 2.1212 | 1.05 | 600 | 2.0259 | 0.1938 | 0.2265 |
| 1.969 | 1.4 | 800 | 1.8285 | 0.2853 | 0.3041 |
| 1.8126 | 1.75 | 1000 | 1.6823 | 0.3570 | 0.3668 |
| 1.7124 | 2.09 | 1200 | 1.5757 | 0.4073 | 0.4168 |
| 1.6288 | 2.44 | 1400 | 1.5124 | 0.4353 | 0.4345 |
| 1.5754 | 2.79 | 1600 | 1.4519 | 0.4393 | 0.4482 |
| 1.5289 | 3.14 | 1800 | 1.4078 | 0.4593 | 0.4704 |
| 1.4859 | 3.49 | 2000 | 1.3729 | 0.4826 | 0.4772 |
| 1.4627 | 3.84 | 2200 | 1.3404 | 0.4993 | 0.4986 |
| 1.4266 | 4.19 | 2400 | 1.3065 | 0.5062 | 0.5061 |
| 1.4048 | 4.54 | 2600 | 1.2738 | 0.5171 | 0.5223 |
| 1.3816 | 4.89 | 2800 | 1.2513 | 0.5304 | 0.5339 |
| 1.3649 | 5.24 | 3000 | 1.2371 | 0.5309 | 0.5336 |
| 1.3436 | 5.58 | 3200 | 1.2223 | 0.5484 | 0.5464 |
| 1.3166 | 5.93 | 3400 | 1.2165 | 0.5484 | 0.5492 |
| 1.3061 | 6.28 | 3600 | 1.1944 | 0.5550 | 0.5507 |
| 1.2792 | 6.63 | 3800 | 1.1837 | 0.5597 | 0.5550 |
| 1.2746 | 6.98 | 4000 | 1.1678 | 0.5583 | 0.5607 |
| 1.2552 | 7.33 | 4200 | 1.1544 | 0.5723 | 0.5703 |
| 1.2414 | 7.68 | 4400 | 1.1456 | 0.5710 | 0.5699 |
| 1.2377 | 8.03 | 4600 | 1.1386 | 0.5743 | 0.5709 |
| 1.2159 | 8.38 | 4800 | 1.1237 | 0.5807 | 0.5835 |
| 1.222 | 8.73 | 5000 | 1.1138 | 0.5848 | 0.5836 |
| 1.1993 | 9.08 | 5200 | 1.1315 | 0.5875 | 0.5787 |
| 1.1904 | 9.42 | 5400 | 1.0984 | 0.5867 | 0.5885 |
| 1.1684 | 9.77 | 5600 | 1.0853 | 0.5910 | 0.5879 |
| 1.1775 | 10.12 | 5800 | 1.0639 | 0.5927 | 0.5922 |
| 1.1735 | 10.47 | 6000 | 1.0625 | 0.5994 | 0.5988 |
| 1.155 | 10.82 | 6200 | 1.0571 | 0.6021 | 0.5948 |
| 1.1379 | 11.17 | 6400 | 1.0743 | 0.5951 | 0.5907 |
| 1.1367 | 11.52 | 6600 | 1.0611 | 0.6045 | 0.5984 |
| 1.1426 | 11.87 | 6800 | 1.0483 | 0.5977 | 0.5944 |
| 1.135 | 12.22 | 7000 | 1.0395 | 0.6083 | 0.6058 |
| 1.1153 | 12.57 | 7200 | 1.0375 | 0.6060 | 0.6007 |
| 1.1251 | 12.91 | 7400 | 1.0405 | 0.6050 | 0.6004 |
| 1.1104 | 13.26 | 7600 | 1.0430 | 0.6094 | 0.6007 |
| 1.1089 | 13.61 | 7800 | 1.0323 | 0.6107 | 0.6053 |
| 1.1053 | 13.96 | 8000 | 1.0236 | 0.6133 | 0.6066 |
| 1.0963 | 14.31 | 8200 | 1.0296 | 0.6088 | 0.6036 |
| 1.1049 | 14.66 | 8400 | 1.0208 | 0.6143 | 0.6091 |
| 1.0961 | 15.01 | 8600 | 1.0285 | 0.6103 | 0.6048 |
| 1.0962 | 15.36 | 8800 | 1.0178 | 0.6140 | 0.6096 |
| 1.0789 | 15.71 | 9000 | 1.0094 | 0.6166 | 0.6130 |
| 1.0905 | 16.06 | 9200 | 1.0114 | 0.6164 | 0.6116 |
| 1.0829 | 16.4 | 9400 | 1.0132 | 0.6112 | 0.6081 |
| 1.0924 | 16.75 | 9600 | 1.0119 | 0.6165 | 0.6114 |
| 1.0855 | 17.1 | 9800 | 1.0113 | 0.6156 | 0.6110 |
| 1.0875 | 17.45 | 10000 | 1.0112 | 0.6153 | 0.6107 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_virus_covid-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-05-03T15:01:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_virus\_covid-seqsight\_32768\_512\_43M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0029
* F1 Score: 0.6167
* Accuracy: 0.6178
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebook; it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="jchenmath/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
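As a usage sketch, a greedy rollout with the loaded table might look like this (hedged: it assumes the pickled dict exposes a `"qtable"` key, as in the Deep RL course format, and the classic gym step API):

```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```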
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | jchenmath/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-03T15:02:26+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
35,
33
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** chenbingAi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
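For loading, a hedged sketch with Unsloth's `FastLanguageModel` (the sequence length below is an illustrative assumption, not the training configuration):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chenbingAi/mistral-7b-bnb-4bit-1.0v",
    max_seq_length=2048,  # assumption; set to what your task needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path
```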
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | chenbingAi/mistral-7b-bnb-4bit-1.0v | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-03T15:02:31+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: chenbingAi
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: chenbingAi\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: chenbingAi\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
78,
78
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n# Uploaded model\n\n- Developed by: chenbingAi\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** rvian
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
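As a hedged inference sketch with plain 🤗 transformers (the prompt format is an assumption; the card does not document the template used during fine-tuning):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("rvian/llama3-midjourney-prompt-generator")
model = AutoModelForCausalLM.from_pretrained("rvian/llama3-midjourney-prompt-generator", device_map="auto")

prompt = "Write a Midjourney prompt for: a lighthouse at dawn"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```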
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | rvian/llama3-midjourney-prompt-generator | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T15:03:51+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: rvian
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: rvian\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: rvian\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
74,
79
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: rvian\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
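Until this is documented, a hedged sketch via the `pipeline` API (assumptions: this merged Vistral checkpoint is a standard causal LM and responds to Vietnamese chat-style prompts):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="dzungPaduahsgs/Vistral7B_mix_v4_adamany_model_batch_32_lr_2e-5_12h40_merged",
    device_map="auto",
)
print(generator("Xin chào, bạn có thể giúp gì cho tôi?", max_new_tokens=64)[0]["generated_text"])
```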
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | dzungPaduahsgs/Vistral7B_mix_v4_adamany_model_batch_32_lr_2e-5_12h40_merged | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T15:04:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# witness_count_mistral_train_run2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
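For orientation, a hedged sketch of how these settings map onto `transformers.TrainingArguments` (an approximate reconstruction, not the authors' actual training script; the output path is a placeholder):

```python
# Approximate reconstruction of the reported hyperparameters — assumptions, not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="witness_count_mistral_train_run2",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # yields the total train batch size of 2
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)
```

With TRL's `SFTTrainer`, these arguments would be passed alongside the PEFT (LoRA) configuration used to produce the adapter.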
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.1", "model-index": [{"name": "witness_count_mistral_train_run2", "results": []}]} | isaaclee/witness_count_mistral_train_run2 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T15:04:30+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us
|
# witness_count_mistral_train_run2
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 | [
"# witness_count_mistral_train_run2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us \n",
"# witness_count_mistral_train_run2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.1"
] | [
54,
44,
7,
9,
9,
4,
126,
5,
55
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us \n# witness_count_mistral_train_run2\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3### Training results### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.1"
] |
text-generation | transformers | # jsfs11/WestTemptressTensor-10.7B-v0.2a-SLERP AWQ
- Model creator: [jsfs11](https://huggingface.co/jsfs11)
- Original model: [WestTemptressTensor-10.7B-v0.2a-SLERP](https://huggingface.co/jsfs11/WestTemptressTensor-10.7B-v0.2a-SLERP)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/WestTemptressTensor-10.7B-v0.2a-SLERP-AWQ"
system_message = "You are WestTemptressTensor-10.7B-v0.2a-SLERP, incarnated as a powerful AI. You were created by jsfs11."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/WestTemptressTensor-10.7B-v0.2a-SLERP-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T15:05:10+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # jsfs11/WestTemptressTensor-10.7B-v0.2a-SLERP AWQ
- Model creator: jsfs11
- Original model: WestTemptressTensor-10.7B-v0.2a-SLERP
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# jsfs11/WestTemptressTensor-10.7B-v0.2a-SLERP AWQ\n\n- Model creator: jsfs11\n- Original model: WestTemptressTensor-10.7B-v0.2a-SLERP",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# jsfs11/WestTemptressTensor-10.7B-v0.2a-SLERP AWQ\n\n- Model creator: jsfs11\n- Original model: WestTemptressTensor-10.7B-v0.2a-SLERP",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
41,
60,
5,
7,
6,
172
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# jsfs11/WestTemptressTensor-10.7B-v0.2a-SLERP AWQ\n\n- Model creator: jsfs11\n- Original model: WestTemptressTensor-10.7B-v0.2a-SLERP## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4868
- F1 Score: 0.7961
- Accuracy: 0.7961
## Model description
More information needed
## Intended uses & limitations
More information needed
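In the absence of an official usage snippet, a minimal inference sketch for loading this adapter with PEFT (the sequence-classification head and the two-label setup are assumptions based on the promoter-prediction task, not confirmed by the card):

```python
# Hypothetical inference sketch — head type and label count are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_65536_512_47M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()
```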
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6088 | 5.13 | 200 | 0.5384 | 0.7487 | 0.7520 |
| 0.4928 | 10.26 | 400 | 0.5026 | 0.7783 | 0.7781 |
| 0.4693 | 15.38 | 600 | 0.4846 | 0.7717 | 0.7716 |
| 0.4521 | 20.51 | 800 | 0.4655 | 0.7841 | 0.7847 |
| 0.4428 | 25.64 | 1000 | 0.4595 | 0.7961 | 0.7961 |
| 0.4323 | 30.77 | 1200 | 0.4505 | 0.8009 | 0.8010 |
| 0.4233 | 35.9 | 1400 | 0.4517 | 0.8060 | 0.8059 |
| 0.4143 | 41.03 | 1600 | 0.4466 | 0.8039 | 0.8042 |
| 0.4078 | 46.15 | 1800 | 0.4488 | 0.8076 | 0.8075 |
| 0.401 | 51.28 | 2000 | 0.4429 | 0.8125 | 0.8124 |
| 0.3939 | 56.41 | 2200 | 0.4469 | 0.8040 | 0.8042 |
| 0.3921 | 61.54 | 2400 | 0.4504 | 0.8093 | 0.8091 |
| 0.3851 | 66.67 | 2600 | 0.4505 | 0.8093 | 0.8091 |
| 0.3812 | 71.79 | 2800 | 0.4460 | 0.8060 | 0.8059 |
| 0.3806 | 76.92 | 3000 | 0.4651 | 0.7993 | 0.7993 |
| 0.3745 | 82.05 | 3200 | 0.4532 | 0.8093 | 0.8091 |
| 0.3722 | 87.18 | 3400 | 0.4718 | 0.7976 | 0.7977 |
| 0.3652 | 92.31 | 3600 | 0.4520 | 0.8142 | 0.8140 |
| 0.367 | 97.44 | 3800 | 0.4515 | 0.8109 | 0.8108 |
| 0.3615 | 102.56 | 4000 | 0.4595 | 0.8109 | 0.8108 |
| 0.3633 | 107.69 | 4200 | 0.4684 | 0.7978 | 0.7977 |
| 0.3561 | 112.82 | 4400 | 0.4668 | 0.8093 | 0.8091 |
| 0.3533 | 117.95 | 4600 | 0.4705 | 0.8044 | 0.8042 |
| 0.351 | 123.08 | 4800 | 0.4721 | 0.8060 | 0.8059 |
| 0.3528 | 128.21 | 5000 | 0.4621 | 0.8141 | 0.8140 |
| 0.3479 | 133.33 | 5200 | 0.4673 | 0.8092 | 0.8091 |
| 0.3466 | 138.46 | 5400 | 0.4637 | 0.8140 | 0.8140 |
| 0.3432 | 143.59 | 5600 | 0.4672 | 0.8141 | 0.8140 |
| 0.3426 | 148.72 | 5800 | 0.4673 | 0.8158 | 0.8157 |
| 0.3417 | 153.85 | 6000 | 0.4708 | 0.8093 | 0.8091 |
| 0.3424 | 158.97 | 6200 | 0.4735 | 0.8125 | 0.8124 |
| 0.3372 | 164.1 | 6400 | 0.4821 | 0.8076 | 0.8075 |
| 0.3379 | 169.23 | 6600 | 0.4719 | 0.8125 | 0.8124 |
| 0.3357 | 174.36 | 6800 | 0.4774 | 0.8125 | 0.8124 |
| 0.332 | 179.49 | 7000 | 0.4816 | 0.8093 | 0.8091 |
| 0.3329 | 184.62 | 7200 | 0.4783 | 0.8124 | 0.8124 |
| 0.3318 | 189.74 | 7400 | 0.4833 | 0.8093 | 0.8091 |
| 0.3303 | 194.87 | 7600 | 0.4834 | 0.8125 | 0.8124 |
| 0.3287 | 200.0 | 7800 | 0.4815 | 0.8108 | 0.8108 |
| 0.3285 | 205.13 | 8000 | 0.4822 | 0.8174 | 0.8173 |
| 0.3328 | 210.26 | 8200 | 0.4839 | 0.8093 | 0.8091 |
| 0.3267 | 215.38 | 8400 | 0.4831 | 0.8125 | 0.8124 |
| 0.3287 | 220.51 | 8600 | 0.4837 | 0.8125 | 0.8124 |
| 0.3268 | 225.64 | 8800 | 0.4911 | 0.8027 | 0.8026 |
| 0.3265 | 230.77 | 9000 | 0.4880 | 0.8076 | 0.8075 |
| 0.3275 | 235.9 | 9200 | 0.4868 | 0.8093 | 0.8091 |
| 0.3238 | 241.03 | 9400 | 0.4868 | 0.8109 | 0.8108 |
| 0.3273 | 246.15 | 9600 | 0.4869 | 0.8093 | 0.8091 |
| 0.3274 | 251.28 | 9800 | 0.4875 | 0.8109 | 0.8108 |
| 0.3265 | 256.41 | 10000 | 0.4874 | 0.8109 | 0.8108 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T15:05:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_65536\_512\_47M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4868
* F1 Score: 0.7961
* Accuracy: 0.7961
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
43,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
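Until the authors provide one, a minimal sketch for a Qwen2-style chat model (the repository id comes from this card's metadata; the chat-template call assumes the tokenizer ships one):

```python
# Hypothetical usage sketch — not an official snippet from the model authors.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huntz47/qwenm10"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```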
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | huntz47/qwenm10 | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T15:06:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
48,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |