modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (245 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
Roamify/llama_3_model | Roamify | "2024-06-21T22:35:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T22:35:23Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** RoamifyRedefined
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
1231czx/2b_1_nll_dpo_iter2_200step | 1231czx | "2024-06-21T22:37:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-21T22:35:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jdgallegoq96/tinyllama_instruct | jdgallegoq96 | "2024-06-21T22:38:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T22:38:00Z" | Entry not found |
kanishka/smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3 | kanishka | "2024-06-22T21:21:29Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual_babylm_measure_nps_as_singular_new",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-21T22:44:51Z" | ---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_measure_nps_as_singular_new
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_measure_nps_as_singular_new
type: kanishka/counterfactual_babylm_measure_nps_as_singular_new
metrics:
- name: Accuracy
type: accuracy
value: 0.4093553697888651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3
This model was trained from scratch on the kanishka/counterfactual_babylm_measure_nps_as_singular_new dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4270
- Accuracy: 0.4094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
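As a quick cross-check (not part of the original card), the schedule above is internally consistent with the results table below: steps per epoch, total steps, and the warmup fraction all follow from the listed hyperparameters. The dataset-size estimate is an assumption that every batch was full (no dropped remainder):

```python
# Sanity-check the training schedule implied by the hyperparameters above.
# All inputs are copied from this card; the example count is an estimate.
train_batch_size = 32
num_epochs = 20
warmup_steps = 32000
steps_per_epoch = 18602  # from the results table: epoch 1.0 ends at step 18602

total_steps = steps_per_epoch * num_epochs            # 372040, matches the final row
approx_examples = steps_per_epoch * train_batch_size  # ~595k examples (assumes full batches)
warmup_fraction = warmup_steps / total_steps          # ~8.6% of training spent warming up

print(total_steps, approx_examples, round(warmup_fraction, 3))  # 372040 595264 0.086
```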
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6072 | 1.0 | 18602 | 3.7687 | 0.3592 |
| 3.3848 | 2.0 | 37204 | 3.5595 | 0.3802 |
| 3.2576 | 3.0 | 55806 | 3.4654 | 0.3927 |
| 3.177 | 4.0 | 74408 | 3.4207 | 0.3982 |
| 3.1212 | 5.0 | 93010 | 3.4026 | 0.4006 |
| 3.0724 | 6.0 | 111612 | 3.3763 | 0.4035 |
| 3.0373 | 7.0 | 130214 | 3.3708 | 0.4051 |
| 3.0102 | 8.0 | 148816 | 3.3649 | 0.4063 |
| 2.9818 | 9.0 | 167418 | 3.3810 | 0.4072 |
| 2.9526 | 10.0 | 186020 | 3.3640 | 0.4078 |
| 2.9332 | 11.0 | 204622 | 3.3817 | 0.4081 |
| 2.9076 | 12.0 | 223224 | 3.3767 | 0.4087 |
| 2.8857 | 13.0 | 241826 | 3.3850 | 0.4089 |
| 2.8653 | 14.0 | 260428 | 3.3919 | 0.4093 |
| 2.8483 | 15.0 | 279030 | 3.3888 | 0.4091 |
| 2.828 | 16.0 | 297632 | 3.4040 | 0.4093 |
| 2.8069 | 17.0 | 316234 | 3.4020 | 0.4094 |
| 2.7906 | 18.0 | 334836 | 3.4096 | 0.4096 |
| 2.7701 | 19.0 | 353438 | 3.4215 | 0.4093 |
| 2.7515 | 20.0 | 372040 | 3.4270 | 0.4094 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
Ekon2002/cumshottwo | Ekon2002 | "2024-06-21T22:48:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T22:46:23Z" | Entry not found |
P0x0/mergekit-model_stock-hvubjjx | P0x0 | "2024-06-21T22:46:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T22:46:32Z" | Entry not found |
youssef227/llama-3-8b-Instruct-bnb-telcom | youssef227 | "2024-06-21T23:01:19Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | "2024-06-21T22:47:05Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
P0x0/mergekit-task_arithmetic-rdblenk | P0x0 | "2024-06-21T22:48:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T22:48:04Z" | Entry not found |
Nareekk/naree | Nareekk | "2024-06-21T22:48:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T22:48:37Z" | Entry not found |
youssef227/llama-3-8b-Instruct-bnb-telcom-2 | youssef227 | "2024-06-21T22:52:59Z" | 0 | 0 | null | [
"license:llama3",
"region:us"
] | null | "2024-06-21T22:52:59Z" | ---
license: llama3
---
|
mrunalmania/palligemma-cord-test | mrunalmania | "2024-06-21T23:45:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T22:57:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jannikjw/phi2_DPO | jannikjw | "2024-06-21T22:57:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T22:57:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
muyiiwaa/post_classifier | muyiiwaa | "2024-06-21T23:01:10Z" | 0 | 0 | keras | [
"keras",
"license:mit",
"region:us"
] | null | "2024-06-21T22:57:45Z" | ---
license: mit
---
|
breno1996/brenio27 | breno1996 | "2024-06-21T23:02:49Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-21T23:02:31Z" | ---
license: openrail
---
|
SiMajid/xlm-roberta-base | SiMajid | "2024-06-21T23:05:26Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-21T23:04:52Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25.0
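As a brief aside (not from the original card), the listed `total_train_batch_size` is simply the per-device batch size scaled by gradient accumulation, which can be verified directly:

```python
# The effective (total) train batch size follows from gradient accumulation.
train_batch_size = 32
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, as listed in the hyperparameters above
```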
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
nihil117/QJab_v.01 | nihil117 | "2024-06-21T23:08:19Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T23:08:14Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** nihil117
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
breno1996/brenio45 | breno1996 | "2024-06-21T23:13:27Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-21T23:13:07Z" | ---
license: openrail
---
|
sandyyuan/galaxyfactory | sandyyuan | "2024-06-21T23:13:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:13:35Z" | Entry not found |
fruk19/C_ASR_MID | fruk19 | "2024-06-22T06:58:16Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"th",
"dataset:fruk19/C_SMALL",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-21T23:16:10Z" | ---
language:
- th
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- fruk19/C_SMALL
metrics:
- wer
model-index:
- name: South_asri
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: aicookcook
type: fruk19/C_SMALL
config: default
split: None
args: 'config: th'
metrics:
- name: Wer
type: wer
value: 3.7677461386031106
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# South_asri
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aicookcook dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0347
- Wer: 3.7677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
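As a rough cross-check (not part of the original card), the step counts in the results table below follow directly from the hyperparameters above; the dataset-size figure assumes every batch was full:

```python
# Sanity-check the Whisper fine-tuning schedule against the results table.
train_batch_size = 4
num_epochs = 6
warmup_steps = 1000
steps_per_epoch = 3000  # from the results table: epoch 2.0 ends at step 6000

total_steps = steps_per_epoch * num_epochs            # 18000, matches the final row
approx_examples = steps_per_epoch * train_batch_size  # ~12k examples (assumes full batches)
warmup_fraction = warmup_steps / total_steps          # ~5.6% of training spent warming up

print(total_steps, approx_examples, round(warmup_fraction, 3))  # 18000 12000 0.056
```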
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0304 | 2.0 | 6000 | 0.0440 | 5.5648 |
| 0.0061 | 4.0 | 12000 | 0.0358 | 4.1532 |
| 0.0007 | 6.0 | 18000 | 0.0347 | 3.7677 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Vyctra/Mikey | Vyctra | "2024-06-21T23:16:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:16:22Z" | Entry not found |
GraydientPlatformAPI/dreammode-turbo | GraydientPlatformAPI | "2024-06-21T23:53:33Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-21T23:16:51Z" | Entry not found |
Isaac1992/forlora | Isaac1992 | "2024-06-21T23:21:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:18:02Z" | Entry not found |
pushpinder08/bert-surprise | pushpinder08 | "2024-06-21T23:22:12Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-21T23:21:12Z" | ---
license: apache-2.0
---
|
Hemantrao/wav2vec2-large-xls-r-300m-hindi_marathi-colab-dynamic-loss | Hemantrao | "2024-06-21T23:24:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-21T23:23:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
onnx-community/Florence-2-base | onnx-community | "2024-07-01T11:54:24Z" | 0 | 3 | transformers.js | [
"transformers.js",
"onnx",
"florence2",
"text2text-generation",
"vision",
"text-generation",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | "2024-06-21T23:25:59Z" | ---
license: mit
pipeline_tag: image-text-to-text
tags:
- vision
- text-generation
- text2text-generation
- image-text-to-text
library_name: transformers.js
---
https://huggingface.co/microsoft/Florence-2-base with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
> [!IMPORTANT]
> NOTE: Florence-2 support is experimental and requires you to install Transformers.js [v3](https://github.com/xenova/transformers.js/tree/v3) from source.
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [GitHub](https://github.com/xenova/transformers.js/tree/v3) using:
```bash
npm install xenova/transformers.js#v3
```
**Example:** Perform image captioning with `onnx-community/Florence-2-base`.
```js
import {
Florence2ForConditionalGeneration,
AutoProcessor,
AutoTokenizer,
RawImage,
} from '@xenova/transformers';
// Load model, processor, and tokenizer
const model_id = 'onnx-community/Florence-2-base';
const model = await Florence2ForConditionalGeneration.from_pretrained(model_id, { dtype: 'fp32' });
const processor = await AutoProcessor.from_pretrained(model_id);
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
// Load image and prepare vision inputs
const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg';
const image = await RawImage.fromURL(url);
const vision_inputs = await processor(image);
// Specify task and prepare text inputs
const task = '<MORE_DETAILED_CAPTION>';
const prompts = processor.construct_prompts(task);
const text_inputs = tokenizer(prompts);
// Generate text
const generated_ids = await model.generate({
...text_inputs,
...vision_inputs,
max_new_tokens: 100,
});
// Decode generated text
const generated_text = tokenizer.batch_decode(generated_ids, { skip_special_tokens: false })[0];
// Post-process the generated text
const result = processor.post_process_generation(generated_text, task, image.size);
console.log(result);
// { '<MORE_DETAILED_CAPTION>': 'The image shows a vintage Volkswagen Beetle car parked on a cobblestone street in front of a yellow building with two wooden doors. The car is a light green color with silver rims and appears to be in good condition. The building has a sloping roof and is painted in a combination of yellow and beige colors. The sky is blue and there are trees in the background. The overall mood of the image is peaceful and serene.' }
```
We also released an online demo, which you can try yourself: https://huggingface.co/spaces/Xenova/florence2-webgpu
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/BJj3jQXNqS_7Nt2MSb2ss.mp4"></video>
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
richardkelly/Qwen-Qwen1.5-7B-1719012685 | richardkelly | "2024-06-21T23:31:35Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"region:us"
] | null | "2024-06-21T23:31:26Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
richardkelly/google-gemma-2b-1719012750 | richardkelly | "2024-06-21T23:32:49Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | "2024-06-21T23:32:30Z" | ---
library_name: peft
base_model: google/gemma-2b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
rashid996958/pix2pix_exp32 | rashid996958 | "2024-06-21T23:33:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:33:42Z" | Entry not found |
Cheese619/Cheese619 | Cheese619 | "2024-06-21T23:34:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:34:21Z" | Entry not found |
ahmedesmail16/0.50-5000Train-Test-swinv2-base | ahmedesmail16 | "2024-06-21T23:34:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:34:42Z" | Entry not found |
scottgr/scott1 | scottgr | "2024-06-21T23:35:47Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-21T23:35:47Z" | ---
license: mit
---
|
felipesampaio2010/CaseyBurgessBRMairaParis | felipesampaio2010 | "2024-06-21T23:40:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:40:17Z" | Entry not found |
JEFFERSONMUSIC/JKGOLDENERAV2 | JEFFERSONMUSIC | "2024-06-21T23:41:41Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-21T23:40:20Z" | ---
license: apache-2.0
---
|
sdadasfgdfgfdg/Gacha_Omnia | sdadasfgdfgfdg | "2024-06-21T23:43:37Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-21T23:42:37Z" | ---
license: openrail
---
|
RaneemElmahdi/NLP | RaneemElmahdi | "2024-06-22T00:23:14Z" | 0 | 0 | null | [
"text-classification",
"region:us"
] | text-classification | "2024-06-21T23:44:20Z" | ---
pipeline_tag: text-classification
--- |
RyanJT/quantized-tinyllama-8bit-1.1b-chat2 | RyanJT | "2024-06-22T00:42:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-21T23:48:43Z" | Entry not found |
Satam/tokenizer | Satam | "2024-06-21T23:51:20Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-21T23:49:56Z" | ---
license: mit
---
|
nasser1/kkk | nasser1 | "2024-06-21T23:50:50Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-21T23:50:50Z" | ---
license: apache-2.0
---
|
vdavidr/CodeLlama-13b-Instruct-hf_Fi__translations_size_104_epochs_10_2024-06-22_02-50-05_3557641 | vdavidr | "2024-06-22T05:03:47Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"region:us"
] | null | "2024-06-21T23:51:19Z" | ---
license: llama2
base_model: codellama/CodeLlama-13b-Instruct-hf
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: CodeLlama-13b-Instruct-hf_Fi__translations_size_104_epochs_10_2024-06-22_02-50-05_3557641
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodeLlama-13b-Instruct-hf_Fi__translations_size_104_epochs_10_2024-06-22_02-50-05_3557641
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4179
- Accuracy: 0.035
- Chrf: 0.719
- Bleu: 0.636
- Sacrebleu: 0.6
- Rouge1: 0.678
- Rouge2: 0.497
- Rougel: 0.645
- Rougelsum: 0.675
- Meteor: 0.594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 104
- training_steps: 1040
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.3439 | 4.0 | 104 | 2.0524 | 0.036 | 0.581 | 0.472 | 0.5 | 0.556 | 0.309 | 0.5 | 0.55 | 0.493 |
| 0.194 | 8.0 | 208 | 2.0784 | 0.036 | 0.55 | 0.435 | 0.4 | 0.519 | 0.275 | 0.479 | 0.513 | 0.466 |
| 0.2001 | 12.0 | 312 | 2.1078 | 0.061 | 0.564 | 0.456 | 0.5 | 0.544 | 0.291 | 0.504 | 0.538 | 0.502 |
| 0.5322 | 16.0 | 416 | 1.7052 | 0.037 | 0.64 | 0.527 | 0.5 | 0.6 | 0.366 | 0.563 | 0.598 | 0.524 |
| 0.1677 | 20.0 | 520 | 1.8442 | 0.037 | 0.606 | 0.493 | 0.5 | 0.566 | 0.334 | 0.529 | 0.557 | 0.496 |
| 0.1649 | 24.0 | 624 | 1.6364 | 0.037 | 0.66 | 0.558 | 0.6 | 0.627 | 0.408 | 0.593 | 0.621 | 0.535 |
| 0.6799 | 28.0 | 728 | 1.5108 | 0.034 | 0.692 | 0.597 | 0.6 | 0.647 | 0.452 | 0.608 | 0.644 | 0.541 |
| 0.1644 | 32.0 | 832 | 1.4941 | 0.035 | 0.709 | 0.623 | 0.6 | 0.671 | 0.478 | 0.634 | 0.667 | 0.553 |
| 0.2382 | 36.0 | 936 | 1.4313 | 0.035 | 0.715 | 0.633 | 0.6 | 0.681 | 0.494 | 0.645 | 0.679 | 0.556 |
| 0.1956 | 40.0 | 1040 | 1.4179 | 0.035 | 0.719 | 0.636 | 0.6 | 0.678 | 0.497 | 0.645 | 0.675 | 0.594 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
|
Vyctra/Matteo | Vyctra | "2024-06-21T23:56:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-21T23:56:43Z" | Entry not found |
yizhujiao/sft_openassistant-guanaco | yizhujiao | "2024-06-26T13:20:36Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-21T23:58:02Z" | Entry not found |
Bertinho24/Yoon2 | Bertinho24 | "2024-06-22T00:02:00Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T00:01:27Z" | ---
license: openrail
---
|
BAKKALIAYOUB/testq | BAKKALIAYOUB | "2024-06-22T00:09:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:09:00Z" | Entry not found |
padilfm/natural-scenes-image-classification-cnn | padilfm | "2024-06-22T01:04:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:09:45Z" | This is a model for scene classidication |
vdavidr/CodeLlama-7b-Instruct-hf_Fi__translations_size_104_epochs_10_2024-06-22_03-11-17_3557642 | vdavidr | "2024-06-22T03:32:14Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | "2024-06-22T00:11:59Z" | ---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: CodeLlama-7b-Instruct-hf_Fi__translations_size_104_epochs_10_2024-06-22_03-11-17_3557642
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodeLlama-7b-Instruct-hf_Fi__translations_size_104_epochs_10_2024-06-22_03-11-17_3557642
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3204
- Accuracy: 0.035
- Chrf: 0.736
- Bleu: 0.656
- Sacrebleu: 0.7
- Rouge1: 0.689
- Rouge2: 0.506
- Rougel: 0.657
- Rougelsum: 0.684
- Meteor: 0.613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 104
- training_steps: 1040
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.2911 | 4.0 | 104 | 1.6937 | 0.042 | 0.623 | 0.509 | 0.5 | 0.586 | 0.333 | 0.536 | 0.584 | 0.513 |
| 0.1827 | 8.0 | 208 | 1.9669 | 0.04 | 0.584 | 0.47 | 0.5 | 0.532 | 0.309 | 0.495 | 0.528 | 0.48 |
| 0.1801 | 12.0 | 312 | 1.7206 | 0.039 | 0.621 | 0.53 | 0.5 | 0.582 | 0.353 | 0.55 | 0.579 | 0.538 |
| 0.5158 | 16.0 | 416 | 1.5969 | 0.038 | 0.654 | 0.536 | 0.5 | 0.602 | 0.368 | 0.565 | 0.599 | 0.52 |
| 0.1586 | 20.0 | 520 | 1.6966 | 0.037 | 0.632 | 0.522 | 0.5 | 0.576 | 0.357 | 0.544 | 0.571 | 0.513 |
| 0.1584 | 24.0 | 624 | 1.5539 | 0.037 | 0.694 | 0.597 | 0.6 | 0.651 | 0.447 | 0.61 | 0.643 | 0.564 |
| 0.6664 | 28.0 | 728 | 1.4467 | 0.035 | 0.701 | 0.611 | 0.6 | 0.655 | 0.452 | 0.616 | 0.651 | 0.574 |
| 0.171 | 32.0 | 832 | 1.6334 | 0.033 | 0.68 | 0.568 | 0.6 | 0.633 | 0.417 | 0.59 | 0.627 | 0.526 |
| 0.227 | 36.0 | 936 | 1.3442 | 0.035 | 0.727 | 0.64 | 0.6 | 0.682 | 0.491 | 0.645 | 0.679 | 0.577 |
| 0.1831 | 40.0 | 1040 | 1.3204 | 0.035 | 0.736 | 0.656 | 0.7 | 0.689 | 0.506 | 0.657 | 0.684 | 0.613 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
|
tannedbum/L3-Nymeria-Maid-8B-exl2 | tannedbum | "2024-06-26T03:08:03Z" | 0 | 0 | null | [
"roleplay",
"sillytavern",
"llama3",
"exl2",
"not-for-all-audiences",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-06-22T00:13:25Z" | ---
license: cc-by-nc-4.0
quantized_by: tannedbum
language:
- en
tags:
- roleplay
- sillytavern
- llama3
- exl2
- not-for-all-audiences
---
![Nymeria](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/resolve/main/Nymeria.png?)
## This version is solely for scientific purposes, of course.
Nymeria is the balanced version; it doesn't force nsfw. Nymeria-Maid carries more of Stheno's weights, leans more toward nsfw and is more submissive.
## Available quants
- [8.0 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/8.0)
- [6.5 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/6.5)
- [5.0 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/5.0)
- [4.25 bpw](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/tree/4.25)
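bpw above means bits per weight. A rough way to gauge the download size of each branch is parameters × bpw / 8 bytes; the sketch below is illustrative arithmetic only (real exl2 checkpoints carry extra metadata, and the 2% overhead factor is an assumption, not a measured value):

```python
def approx_quant_size_gb(n_params: float, bpw: float, overhead: float = 0.02) -> float:
    """Rough quantized checkpoint size: params * bits-per-weight / 8 bytes,
    plus a small fudge factor for quantization metadata (scales, etc.)."""
    size_bytes = n_params * bpw / 8
    return size_bytes * (1 + overhead) / 1e9

# Approximate sizes for an 8B-parameter model at each available bpw.
for bpw in (8.0, 6.5, 5.0, 4.25):
    print(f"{bpw} bpw -> ~{approx_quant_size_gb(8.03e9, bpw):.1f} GB")
```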
## Download with git:
```shell
git clone --single-branch --branch 6.5 https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2 L3-Nymeria-Maid-8B-exl2-6.5
```
## SillyTavern
## Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
## Advanced Formatting
[Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v2.0)
Instruct Mode: Enabled
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was merged using the slerp merge method.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sao10K/L3-8B-Stheno-v3.2
layer_range: [0, 32]
- model: princeton-nlp/Llama-3-Instruct-8B-SimPO
layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
t:
- filter: self_attn
value: [0.2, 0.4, 0.4, 0.6]
- filter: mlp
value: [0.8, 0.6, 0.6, 0.4]
- value: 0.4
dtype: bfloat16
```
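The `slerp` method above spherically interpolates each pair of weight tensors, with the `t` schedules controlling how much of each parent ends up in the attention vs. MLP layers. A minimal pure-Python sketch of spherical linear interpolation between two weight vectors (illustrative only; not mergekit's actual implementation, which works tensor-by-tensor with the per-filter `t` values):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two vectors.
    t=0 returns v0, t=1 returns v1, intermediate t follows the arc between them."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # 45-degree point on the unit circle
```

Compared with plain linear interpolation, slerp preserves the magnitude of the interpolated weights along the arc, which is one common argument for using it when merging model checkpoints.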
---
## Original model information:
## Model: Sao10K/L3-8B-Stheno-v3.2
Stheno-v3.2-Zeta
Changes compared to v3.1
<br>\- Included a mix of SFW and NSFW Storywriting Data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts)
<br>\- Included More Instruct / Assistant-Style Data
<br>\- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it.
<br>\- Hyperparameter tinkering for training, resulting in lower loss levels.
Testing Notes - Compared to v3.1
<br>\- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
<br>\- Better at Storywriting / Narration.
<br>\- Better at Assistant-type Tasks.
<br>\- Better Multi-Turn Coherency -> Reduced Issues?
<br>\- Slightly less creative? A worthy tradeoff. Still creative.
<br>\- Better prompt / instruction adherence.
---
Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum |
Vyctra/enrico | Vyctra | "2024-06-22T00:16:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:16:21Z" | Entry not found |
Frixi/rbxed | Frixi | "2024-06-22T00:26:36Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T00:17:18Z" | ---
license: openrail
---
|
bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF | bendavis78 | "2024-06-22T00:20:56Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"base_model:LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T00:20:55Z" | ---
base_model: LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF
This model was converted to GGUF format from [`LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2`](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -c 2048
```
|
vdavidr/OpenCodeInterpreter-DS-6.7B_En__translations_size_104_epochs_10_2024-06-22_03-22-06_3557996 | vdavidr | "2024-06-22T03:42:49Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:m-a-p/OpenCodeInterpreter-DS-6.7B",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T00:22:51Z" | ---
license: apache-2.0
base_model: m-a-p/OpenCodeInterpreter-DS-6.7B
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: OpenCodeInterpreter-DS-6.7B_En__translations_size_104_epochs_10_2024-06-22_03-22-06_3557996
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenCodeInterpreter-DS-6.7B_En__translations_size_104_epochs_10_2024-06-22_03-22-06_3557996
This model is a fine-tuned version of [m-a-p/OpenCodeInterpreter-DS-6.7B](https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-6.7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1148
- Accuracy: 0.042
- Chrf: 0.511
- Bleu: 0.407
- Sacrebleu: 0.4
- Rouge1: 0.516
- Rouge2: 0.259
- Rougel: 0.474
- Rougelsum: 0.508
- Meteor: 0.404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 104
- training_steps: 1040
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.145 | 4.0 | 104 | 1.2220 | 0.035 | 0.711 | 0.572 | 0.6 | 0.657 | 0.424 | 0.585 | 0.646 | 0.499 |
| 0.1118 | 8.0 | 208 | 1.3598 | 0.034 | 0.708 | 0.563 | 0.6 | 0.66 | 0.421 | 0.586 | 0.641 | 0.519 |
| 0.1119 | 12.0 | 312 | 1.4868 | 0.037 | 0.719 | 0.586 | 0.6 | 0.674 | 0.441 | 0.597 | 0.659 | 0.522 |
| 0.4556 | 16.0 | 416 | 1.6801 | 0.04 | 0.704 | 0.587 | 0.6 | 0.697 | 0.468 | 0.613 | 0.682 | 0.525 |
| 0.1586 | 20.0 | 520 | 2.0607 | 0.043 | 0.672 | 0.552 | 0.6 | 0.659 | 0.422 | 0.584 | 0.65 | 0.46 |
| 0.2311 | 24.0 | 624 | 2.6808 | 0.039 | 0.57 | 0.449 | 0.4 | 0.574 | 0.312 | 0.504 | 0.566 | 0.42 |
| 1.0024 | 28.0 | 728 | 2.7805 | 0.055 | 0.569 | 0.462 | 0.5 | 0.572 | 0.322 | 0.52 | 0.56 | 0.413 |
| 0.2281 | 32.0 | 832 | 2.9914 | 0.044 | 0.546 | 0.44 | 0.4 | 0.548 | 0.296 | 0.504 | 0.542 | 0.409 |
| 0.3755 | 36.0 | 936 | 3.0919 | 0.045 | 0.508 | 0.406 | 0.4 | 0.522 | 0.261 | 0.479 | 0.515 | 0.407 |
| 0.2274 | 40.0 | 1040 | 3.1148 | 0.042 | 0.511 | 0.407 | 0.4 | 0.516 | 0.259 | 0.474 | 0.508 | 0.404 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
|
bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF | bendavis78 | "2024-06-22T00:22:55Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"base_model:LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T00:22:53Z" | ---
base_model: LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF
This model was converted to GGUF format from [`LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2`](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -c 2048
```
|
vdavidr/Artigenz-Coder-DS-6.7B_En__translations_size_104_epochs_10_2024-06-22_03-26-15_3557997 | vdavidr | "2024-06-22T03:46:45Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Artigenz/Artigenz-Coder-DS-6.7B",
"license:other",
"region:us"
] | null | "2024-06-22T00:27:01Z" | ---
license: other
base_model: Artigenz/Artigenz-Coder-DS-6.7B
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: Artigenz-Coder-DS-6.7B_En__translations_size_104_epochs_10_2024-06-22_03-26-15_3557997
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Artigenz-Coder-DS-6.7B_En__translations_size_104_epochs_10_2024-06-22_03-26-15_3557997
This model is a fine-tuned version of [Artigenz/Artigenz-Coder-DS-6.7B](https://huggingface.co/Artigenz/Artigenz-Coder-DS-6.7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1141
- Accuracy: 0.06
- Chrf: 0.499
- Bleu: 0.407
- Sacrebleu: 0.4
- Rouge1: 0.494
- Rouge2: 0.242
- Rougel: 0.449
- Rougelsum: 0.488
- Meteor: 0.401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 104
- training_steps: 1040
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.1365 | 4.0 | 104 | 1.1838 | 0.046 | 0.714 | 0.6 | 0.6 | 0.676 | 0.459 | 0.613 | 0.668 | 0.522 |
| 0.1026 | 8.0 | 208 | 1.3421 | 0.045 | 0.699 | 0.569 | 0.6 | 0.66 | 0.437 | 0.601 | 0.648 | 0.482 |
| 0.1001 | 12.0 | 312 | 1.3957 | 0.047 | 0.724 | 0.621 | 0.6 | 0.701 | 0.482 | 0.63 | 0.685 | 0.528 |
| 0.4589 | 16.0 | 416 | 1.6948 | 0.046 | 0.702 | 0.601 | 0.6 | 0.694 | 0.473 | 0.62 | 0.681 | 0.51 |
| 0.1812 | 20.0 | 520 | 2.5671 | 0.077 | 0.59 | 0.47 | 0.5 | 0.605 | 0.346 | 0.526 | 0.591 | 0.403 |
| 0.1966 | 24.0 | 624 | 2.5118 | 0.066 | 0.607 | 0.502 | 0.5 | 0.607 | 0.357 | 0.544 | 0.601 | 0.428 |
| 0.9528 | 28.0 | 728 | 2.7303 | 0.055 | 0.567 | 0.465 | 0.5 | 0.577 | 0.325 | 0.52 | 0.567 | 0.429 |
| 0.2147 | 32.0 | 832 | 2.9680 | 0.055 | 0.529 | 0.435 | 0.4 | 0.541 | 0.285 | 0.489 | 0.533 | 0.402 |
| 0.367 | 36.0 | 936 | 3.1490 | 0.067 | 0.508 | 0.417 | 0.4 | 0.516 | 0.264 | 0.469 | 0.509 | 0.392 |
| 0.2157 | 40.0 | 1040 | 3.1141 | 0.06 | 0.499 | 0.407 | 0.4 | 0.494 | 0.242 | 0.449 | 0.488 | 0.401 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
|
kongj/kongjij | kongj | "2024-06-22T00:28:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:28:39Z" | Entry not found |
sperfu/EyeDoc | sperfu | "2024-06-22T00:37:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:37:53Z" | Entry not found |
vdavidr/llama-7b-finnish-instruct-v0.2_En__translations_size_104_epochs_10_2024-06-22_03-40-15_3557998 | vdavidr | "2024-06-22T06:34:48Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Finnish-NLP/llama-7b-finnish-instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T00:43:08Z" | ---
license: apache-2.0
base_model: Finnish-NLP/llama-7b-finnish-instruct-v0.2
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: llama-7b-finnish-instruct-v0.2_En__translations_size_104_epochs_10_2024-06-22_03-40-15_3557998
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-finnish-instruct-v0.2_En__translations_size_104_epochs_10_2024-06-22_03-40-15_3557998
This model is a fine-tuned version of [Finnish-NLP/llama-7b-finnish-instruct-v0.2](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5080
- Accuracy: 0.574
- Chrf: 0.692
- Bleu: 0.544
- Sacrebleu: 0.5
- Rouge1: 0.598
- Rouge2: 0.374
- Rougel: 0.581
- Rougelsum: 0.585
- Meteor: 0.465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 104
- training_steps: 1040
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.4205 | 4.0 | 104 | 1.0350 | 0.577 | 0.44 | 0.277 | 0.3 | 0.317 | 0.149 | 0.288 | 0.297 | 0.355 |
| 0.2068 | 8.0 | 208 | 0.9393 | 0.578 | 0.501 | 0.338 | 0.3 | 0.391 | 0.206 | 0.357 | 0.353 | 0.362 |
| 0.1289 | 12.0 | 312 | 0.8516 | 0.575 | 0.548 | 0.38 | 0.4 | 0.439 | 0.225 | 0.414 | 0.416 | 0.395 |
| 0.7509 | 16.0 | 416 | 0.7754 | 0.578 | 0.578 | 0.42 | 0.4 | 0.461 | 0.259 | 0.441 | 0.44 | 0.431 |
| 0.0951 | 20.0 | 520 | 0.7348 | 0.574 | 0.606 | 0.459 | 0.5 | 0.513 | 0.291 | 0.484 | 0.483 | 0.446 |
| 0.1201 | 24.0 | 624 | 0.6261 | 0.577 | 0.637 | 0.484 | 0.5 | 0.541 | 0.321 | 0.516 | 0.505 | 0.437 |
| 1.0287 | 28.0 | 728 | 0.5589 | 0.574 | 0.668 | 0.51 | 0.5 | 0.559 | 0.334 | 0.535 | 0.51 | 0.458 |
| 0.0522 | 32.0 | 832 | 0.5623 | 0.574 | 0.659 | 0.526 | 0.5 | 0.573 | 0.349 | 0.552 | 0.541 | 0.449 |
| 0.1314 | 36.0 | 936 | 0.5154 | 0.576 | 0.682 | 0.534 | 0.5 | 0.588 | 0.36 | 0.573 | 0.576 | 0.465 |
| 0.0449 | 40.0 | 1040 | 0.5080 | 0.574 | 0.692 | 0.544 | 0.5 | 0.598 | 0.374 | 0.581 | 0.585 | 0.465 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
|
sjunique/results_split_1 | sjunique | "2024-06-22T00:46:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:46:51Z" | Entry not found |
Coolwowsocoolwow/Wii_Party_U_Announcer | Coolwowsocoolwow | "2024-06-22T00:50:30Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T00:48:57Z" | ---
license: openrail
---
|
aryarajput/Hii | aryarajput | "2024-06-22T00:50:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:50:31Z" | Entry not found |
MikuChan/JD1 | MikuChan | "2024-06-22T00:57:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T00:57:13Z" | Entry not found |
C0ttontheBunny/DarkTama | C0ttontheBunny | "2024-06-22T01:07:16Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T01:06:57Z" | ---
license: openrail
---
|
yogsth0t/czme1 | yogsth0t | "2024-06-22T01:09:02Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T01:09:02Z" | ---
license: apache-2.0
---
|
yraziel/ray_william_johnson | yraziel | "2024-06-22T01:16:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T01:13:07Z" | Entry not found |
channo39mz2/whisper-small-dv | channo39mz2 | "2024-06-22T01:17:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T01:17:17Z" | Entry not found |
to100mak/Llama-3-Open-Ko-8B-Instruct-to100mak | to100mak | "2024-06-22T01:38:27Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T01:27:46Z" | ---
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** to100mak
- **License:** apache-2.0
- **Finetuned from model :** beomi/Llama-3-Open-Ko-8B-Instruct-preview
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manbeast3b/KinoInferTry1 | manbeast3b | "2024-06-22T01:28:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T01:28:24Z" | Entry not found |
lahcen001/nana | lahcen001 | "2024-06-22T01:29:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T01:29:34Z" | Entry not found |
chickenrice0721/whisper-large-v3-translate-zh-v0.1-lt | chickenrice0721 | "2024-06-22T03:22:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"translate",
"generated_from_trainer",
"zh",
"base_model:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-22T01:30:40Z" | ---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- audio
- automatic-speech-recognition
- translate
- generated_from_trainer
language:
- zh
metrics:
- cer
- wer
model-index:
- name: whisper-large-v3-translate-zh-v0.1-lt
results: []
pipeline_tag: automatic-speech-recognition
---
# whisper-large-v3-translate-zh-v0.1-lt
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3).
## Model description
Fine-tuned on 3,500 hours of paired data (Japanese audio, Chinese subtitles); in translate mode the model outputs Chinese directly.
## Usage
task='translate', language='ja'
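A minimal sketch of calling this checkpoint through the 🤗 Transformers ASR pipeline; the audio file name is a placeholder, and loading the model downloads the full large-v3-sized weights:

```python
def build_translator(model_id: str = "chickenrice0721/whisper-large-v3-translate-zh-v0.1-lt"):
    """Return an ASR pipeline for this fine-tuned checkpoint."""
    # Imported lazily so the snippet stays importable without transformers installed.
    from transformers import pipeline

    return pipeline("automatic-speech-recognition", model=model_id)

# task='translate' with language='ja' makes the model emit Chinese directly.
GENERATE_KWARGS = {"task": "translate", "language": "ja"}

if __name__ == "__main__":
    asr = build_translator()
    result = asr("audio.wav", generate_kwargs=GENERATE_KWARGS)  # "audio.wav" is a placeholder
    print(result["text"])
```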
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- dropout: 0.1
- mask_time_prob: 0.05
- mask_feature_prob: 0.2
- condition_on_previous_text_rate: 0.5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 2.1282 | 0.0739 | 1000 | 2.1852 | 1.9014 | 4.4904 |
| 1.8567 | 0.1478 | 2000 | 1.8366 | 1.7295 | 3.8716 |
| 1.6968 | 0.2217 | 3000 | 1.2615 | 1.6279 | 2.4825 |
| 1.6264 | 0.2956 | 4000 | 1.0536 | 1.5625 | 1.8101 |
| 1.5687 | 0.3695 | 5000 | 1.0932 | 1.5410 | 2.1218 |
| 1.531 | 0.4433 | 6000 | 1.5156 | 1.2533 | 2.3689 |
| 1.4875 | 0.5172 | 7000 | 1.4697 | 0.9560 | 1.5588 |
| 1.4518 | 0.5911 | 8000 | 1.4521 | 1.0170 | 1.6392 |
| 1.4472 | 0.6650 | 9000 | 1.4463 | 1.0084 | 1.6420 |
| 1.3991 | 0.7389 | 10000 | 1.4238 | 0.9266 | 1.6992 |
| 1.4266 | 0.8128 | 11000 | 1.4141 | 0.8365 | 1.3056 |
| 1.3755 | 0.8867 | 12000 | 1.4033 | 0.7904 | 1.3119 |
| 1.3833 | 0.9606 | 13000 | 1.4004 | 0.8600 | 1.3333 |
| 1.3224 | 1.0345 | 14000 | 1.3770 | 0.8243 | 1.4560 |
| 1.3295 | 1.1084 | 15000 | 1.3770 | 0.7852 | 1.4298 |
| 1.3136 | 1.1823 | 16000 | 1.3564 | 0.7176 | 1.1826 |
| 1.2832 | 1.2561 | 17000 | 1.3535 | 0.6767 | 1.1781 |
| 1.2917 | 1.3300 | 18000 | 1.3584 | 0.7255 | 1.1218 |
| 1.27 | 1.4039 | 19000 | 1.3330 | 0.6590 | 1.1242 |
| 1.2704 | 1.4778 | 20000 | 1.3379 | 0.6934 | 1.1944 |
| 1.2614 | 1.5517 | 21000 | 1.3330 | 0.6949 | 1.1820 |
| 1.2455 | 1.6256 | 22000 | 1.3350 | 0.6931 | 1.0892 |
| 1.2475 | 1.6995 | 23000 | 1.3154 | 0.6662 | 1.1576 |
| 1.2583 | 1.7734 | 24000 | 1.3164 | 0.6490 | 1.0705 |
| 1.2333 | 1.8473 | 25000 | 1.3184 | 0.6833 | 1.1480 |
| 1.2462 | 1.9212 | 26000 | 1.3125 | 0.6672 | 1.1612 |
| 1.2279 | 1.9950 | 27000 | 1.3047 | 0.6644 | 1.2179 |
| 1.1908 | 2.0689 | 28000 | 1.3047 | 0.6938 | 1.2221 |
| 1.1831 | 2.1428 | 29000 | 1.2998 | 0.6316 | 1.0717 |
| 1.1705 | 2.2167 | 30000 | 1.3018 | 0.6165 | 1.0958 |
| 1.171 | 2.2906 | 31000 | 1.3027 | 0.6109 | 1.0868 |
| 1.1567 | 2.3645 | 32000 | 1.3037 | 0.6485 | 1.1736 |
| 1.1705 | 2.4384 | 33000 | 1.2969 | 0.6078 | 1.0515 |
| 1.1819 | 2.5123 | 34000 | 1.2949 | 0.6158 | 1.0362 |
| 1.1447 | 2.5862 | 35000 | 1.2920 | 0.6365 | 1.0558 |
| 1.17 | 2.6601 | 36000 | 1.2881 | 0.6339 | 1.0868 |
| 1.1495 | 2.7340 | 37000 | 1.2949 | 0.6297 | 1.0437 |
| 1.1395 | 2.8078 | 38000 | 1.2900 | 0.6285 | 1.1221 |
| 1.15 | 2.8817 | 39000 | 1.2891 | 0.5997 | 1.0217 |
| 1.1623 | 2.9556 | 40000 | 1.2881 | 0.6085 | 1.0395 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
AleDiazT/finetuning-emotion-model-5 | AleDiazT | "2024-06-22T01:43:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-22T01:31:38Z" | Entry not found |
howarudo/paligemma-ft | howarudo | "2024-06-22T01:38:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T01:38:13Z" | Entry not found |
SimoLM/final_model | SimoLM | "2024-06-22T01:49:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T01:48:53Z" | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** tferdi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
inflaton/Qwen2-1.5B-Instruct-bnb-4bit-MAC-merged_4bit_forced | inflaton | "2024-06-22T01:51:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-22T01:49:08Z" | ---
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** inflaton
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magnifi/parser_user_v8-0621-epoch7-0.002_nosystemprompt | magnifi | "2024-06-22T03:37:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-22T01:50:42Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BakingBeans/Archive | BakingBeans | "2024-06-25T10:53:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T01:53:02Z" | Entry not found |
sandyyuan/galaxyfactorycropped | sandyyuan | "2024-06-22T21:13:54Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2024-06-22T01:54:00Z" | Entry not found |
darylsilva/mimimal | darylsilva | "2024-06-23T05:44:18Z" | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T01:57:55Z" | ---
license: apache-2.0
---
|
Hakaijuxd/Softvoice | Hakaijuxd | "2024-06-22T02:30:00Z" | 0 | 0 | null | [
"onnx",
"region:us"
] | null | "2024-06-22T02:04:51Z" | Entry not found |
AroAITeam/Mouhu-0.1-v0.0.1 | AroAITeam | "2024-06-22T04:52:32Z" | 0 | 0 | null | [
"text-generation",
"ja",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-22T02:05:18Z" | ---
license: apache-2.0
language:
- ja
- en
pipeline_tag: text-generation
--- |
MsgmSgmsG/llama-3-8b-rt-01 | MsgmSgmsG | "2024-06-22T02:35:42Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T02:07:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bhaelen/example-model | bhaelen | "2024-06-22T02:25:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:13:33Z" | # Example Model
This is my model card readme
---
license: mit
---
|
wahoong/llamav3-8b-unsloth-v2 | wahoong | "2024-06-22T02:16:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T02:13:50Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Unsloth LLaMA-3 8B fine-tuned to translate stripped decompiled code back into C source code
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
magnifi/parser_user_v8-0621-epoch8-0.002_nosystemprompt | magnifi | "2024-06-22T02:17:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-22T02:15:00Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Aptronym/LowStepLoras | Aptronym | "2024-06-22T03:58:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:15:35Z" | Entry not found |
VDBLOI2024/VDBLOI-AI | VDBLOI2024 | "2024-06-22T02:16:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:16:21Z" | Entry not found |
MsgmSgmsG/llama-3-8b-rt-00 | MsgmSgmsG | "2024-06-22T02:19:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:19:41Z" | Entry not found |
ErikGG64/Kurt_Cobain_Talking | ErikGG64 | "2024-06-22T02:20:46Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-22T02:20:25Z" | ---
license: openrail
---
|
DFofanov78/llama-3-8b-Instruct-bnb-4bit | DFofanov78 | "2024-06-22T11:45:55Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"unsloth",
"llama-3",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T02:22:07Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- llama
- llama-3
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
Directly quantized 4bit model with `bitsandbytes`.
We have a Google Colab Tesla T4 notebook for Llama-3 8b here: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. |
fabrimath/wav2vec2-base-finetuned-ks | fabrimath | "2024-06-22T05:24:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-22T02:25:10Z" | Entry not found |
cycy233/tttt | cycy233 | "2024-06-22T02:26:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:26:15Z" | Entry not found |
b-fujino/LUM_bfloat16 | b-fujino | "2024-06-22T02:37:44Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T02:29:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
allstrives/simple-model | allstrives | "2024-06-22T05:16:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:32:35Z" | Entry not found |
FurnTheFurnace/Applio-Kaggle | FurnTheFurnace | "2024-06-22T04:16:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:33:33Z" | <h1 align="center">
<a href="https://applio.org" target="_blank"><img src="https://github.com/IAHispano/Applio/assets/133521603/78e975d8-b07f-47ba-ab23-5a31592f322a" alt="Applio"></a>
</h1>
<p align="center">
<a href="https://github.com/IAHispano/Applio/graphs/contributors">
<img alt="Contributors" src="https://img.shields.io/github/contributors/iahispano/applio?style=for-the-badge&color=FFFFFF" />
</a>
<a href="https://github.com/IAHispano/Applio/releases/tag/3.2.0">
<img alt="Release" src="https://img.shields.io/github/release/iahispano/applio?style=for-the-badge&color=FFFFFF" />
</a>
<a href="https://github.com/IAHispano/Applio">
<img alt="Stars" src="https://img.shields.io/github/stars/iahispano/applio?style=for-the-badge&color=FFFFFF" />
</a>
<a href="https://github.com/IAHispano/Applio">
<img alt="Fork" src="https://img.shields.io/github/forks/iahispano/applio?style=for-the-badge&color=FFFFFF" />
</a>
<a href="https://github.com/IAHispano/Applio/issues">
<img alt="Issues" src="https://img.shields.io/github/issues/iahispano/applio?style=for-the-badge&color=FFFFFF" />
</a>
<a href="https://cdn-uploads.huggingface.co/production/uploads/652c5d34ec10d7e4810a0513/AUlwXRIMhOSfKG6al-hm0.png">
<img alt="Kaggle" src="https://camo.githubusercontent.com/74f4165a70cf43b25e5e8df17a045426ddbf698b625a7c71c0a6412daa4eb011/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4b6167676c652d3033356137643f7374796c653d666f722d7468652d6261646765266c6f676f3d6b6167676c65266c6f676f436f6c6f723d7768697465" />
</a>
</p>
<p align="center">
VITS-based Voice Conversion focused on simplicity, quality, and performance. Originally created by <a href="https://huggingface.co/blaise-tk">Blaise.</a>
</p>
<p align="center">
This is an upcoming Kaggle version of Applio made by
<a href="https://discordapp.com/users/989772388508000306">Vidal</a>
and
<a href="https://discordapp.com/users/984567398826917918">Mantrax</a>. We kindly ask you not to send DMs to either of us about the link; there is no link until it is complete and ready to be made public.
</p>
|
Knowtex-ai/Oncology-Model-4bit-Llama3 | Knowtex-ai | "2024-06-22T02:42:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T02:41:56Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** kalycodes
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
casque/Swimming_Lesson_6_v1 | casque | "2024-06-22T02:49:27Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-22T02:47:56Z" | ---
license: creativeml-openrail-m
---
|
bbyxinnocenz/RBN_HSR | bbyxinnocenz | "2024-06-22T02:51:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:51:27Z" | Entry not found |
howarudo/gemma-demo-vqa-ft | howarudo | "2024-06-22T02:52:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:52:34Z" | Entry not found |
bbyxinnocenz/RBN_CHVY | bbyxinnocenz | "2024-06-22T02:52:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T02:52:39Z" | Entry not found |
Rickliou/gemma-medical_qa-Finetune | Rickliou | "2024-06-22T02:55:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T02:53:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vdavidr/deepseek-coder-6.7b-instruct_En__translations_size_104_epochs_10_2024-06-22_06-06-55_3557999 | vdavidr | "2024-06-22T06:27:46Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | "2024-06-22T03:07:43Z" | ---
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: deepseek-coder-6.7b-instruct_En__translations_size_104_epochs_10_2024-06-22_06-06-55_3557999
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-coder-6.7b-instruct_En__translations_size_104_epochs_10_2024-06-22_06-06-55_3557999
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1144
- Accuracy: 0.047
- Chrf: 0.5
- Bleu: 0.39
- Sacrebleu: 0.4
- Rouge1: 0.501
- Rouge2: 0.239
- Rougel: 0.448
- Rougelsum: 0.497
- Meteor: 0.413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 104
- training_steps: 1040
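As a sanity check, the reported totals follow directly from the per-device settings; a minimal sketch (variable names are illustrative, not taken from the training script, and gradient accumulation of 1 is assumed from the reported totals):

```python
# Effective (total) batch size = per-device batch size
# x number of devices x gradient-accumulation steps.
train_batch_size = 1    # per-device train batch size
num_devices = 4         # multi-GPU setup
grad_accum_steps = 1    # assumed, consistent with the reported totals

total_train_batch_size = train_batch_size * num_devices * grad_accum_steps
print(total_train_batch_size)  # -> 4, matching the value above

# With 104 warmup steps out of 1040 training steps,
# warmup covers 10% of training.
warmup_fraction = 104 / 1040
print(warmup_fraction)  # -> 0.1
```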
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.1338 | 4.0 | 104 | 1.1433 | 0.033 | 0.731 | 0.594 | 0.6 | 0.675 | 0.459 | 0.617 | 0.668 | 0.515 |
| 0.1034 | 8.0 | 208 | 1.2599 | 0.03 | 0.723 | 0.594 | 0.6 | 0.687 | 0.462 | 0.611 | 0.674 | 0.495 |
| 0.0974 | 12.0 | 312 | 1.3238 | 0.037 | 0.739 | 0.63 | 0.6 | 0.719 | 0.505 | 0.638 | 0.7 | 0.545 |
| 0.6468 | 16.0 | 416 | 1.9216 | 0.042 | 0.664 | 0.547 | 0.5 | 0.652 | 0.409 | 0.567 | 0.64 | 0.458 |
| 0.1566 | 20.0 | 520 | 2.1866 | 0.045 | 0.656 | 0.538 | 0.5 | 0.657 | 0.428 | 0.581 | 0.649 | 0.494 |
| 0.2056 | 24.0 | 624 | 2.5536 | 0.042 | 0.585 | 0.468 | 0.5 | 0.586 | 0.334 | 0.527 | 0.58 | 0.441 |
| 0.9677 | 28.0 | 728 | 2.8086 | 0.051 | 0.561 | 0.455 | 0.5 | 0.565 | 0.314 | 0.509 | 0.559 | 0.417 |
| 0.2163 | 32.0 | 832 | 2.9769 | 0.044 | 0.532 | 0.42 | 0.4 | 0.523 | 0.26 | 0.475 | 0.517 | 0.412 |
| 0.3617 | 36.0 | 936 | 3.1040 | 0.052 | 0.502 | 0.392 | 0.4 | 0.496 | 0.239 | 0.447 | 0.492 | 0.427 |
| 0.2153 | 40.0 | 1040 | 3.1144 | 0.047 | 0.5 | 0.39 | 0.4 | 0.501 | 0.239 | 0.448 | 0.497 | 0.413 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
|
ranggaaldosas/bart_large_cnn_with_multinews | ranggaaldosas | "2024-06-22T06:09:10Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T03:22:29Z" | ---
license: apache-2.0
---
|
loeol/Llama-3-8b-BFI-Anonymous | loeol | "2024-06-22T04:17:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T03:22:58Z" | ---
license: llama3
---
|