modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
HazeSolsa/kagaminerin | HazeSolsa | "2024-06-23T07:50:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T07:45:53Z" | Entry not found |
konishant/testing | konishant | "2024-06-23T20:40:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T07:55:02Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
excalibur12/las_asr-scr_w2v2-base_002 | excalibur12 | "2024-06-23T09:53:50Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T07:55:38Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: las_asr-scr_w2v2-base_002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# las_asr-scr_w2v2-base_002
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4121
- Per: 0.1605
- Pcc: 0.7256
- Ctc Loss: 0.4686
- Mse Loss: 1.0317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 2222
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 742
- training_steps: 7420
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per | Pcc | Ctc Loss | Mse Loss |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:--------:|
| 12.6219 | 1.0 | 742 | 5.8749 | 0.9897 | 0.5612 | 3.7480 | 2.3327 |
| 3.9885 | 2.0 | 1484 | 2.0871 | 0.3274 | 0.6986 | 1.2456 | 0.8803 |
| 1.7464 | 3.0 | 2226 | 1.5832 | 0.1972 | 0.7004 | 0.6288 | 0.9002 |
| 1.2583 | 4.0 | 2968 | 1.8791 | 0.1858 | 0.7250 | 0.5608 | 1.2261 |
| 0.9058 | 5.0 | 3710 | 1.9793 | 0.1736 | 0.7355 | 0.5207 | 1.3244 |
| 0.5912 | 6.0 | 4452 | 1.4470 | 0.1712 | 0.7297 | 0.4956 | 0.9576 |
| 0.2949 | 7.0 | 5194 | 1.3030 | 0.1670 | 0.7310 | 0.4880 | 0.8993 |
| 0.0454 | 8.0 | 5936 | 2.0771 | 0.1645 | 0.7234 | 0.4762 | 1.4066 |
| -0.1676 | 9.0 | 6678 | 1.3317 | 0.1616 | 0.7278 | 0.4743 | 0.9749 |
| -0.2955 | 10.0 | 7420 | 1.4121 | 0.1605 | 0.7256 | 0.4686 | 1.0317 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.2
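No usage instructions are provided. Below is a minimal inference sketch that treats the checkpoint as a standard wav2vec2 CTC model; this is an assumption — the scoring branch behind the Pcc/MSE metrics is a multi-task addition that may require the original training code.
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hedged sketch: assumes the checkpoint loads as a plain CTC model.
processor = Wav2Vec2Processor.from_pretrained("excalibur12/las_asr-scr_w2v2-base_002")
model = Wav2Vec2ForCTC.from_pretrained("excalibur12/las_asr-scr_w2v2-base_002")

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```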
|
Danikdsa/jisoo | Danikdsa | "2024-06-23T08:12:42Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T08:00:01Z" | ---
license: openrail
---
|
azurehorizon/gemma-2b-it-Code-Instruct-ft-122k_alpaca_style | azurehorizon | "2024-06-23T08:20:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T08:00:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Instruction fine-tuning of Gemma using QLoRA and supervised fine-tuning (SFT)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:**
- **Funded by [optional]:** azurehorizon
- **Model type:** gemma-2b-it
- **Language(s) (NLP):** English
- **License:** gemma
- **Finetuned from model [optional]:** gemma-2b-it
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
TokenBender/code_instructions_122k_alpaca_style
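The card gives no training script. The following is a minimal sketch of what QLoRA supervised fine-tuning on this dataset could look like with `peft` and `trl`; all hyperparameters and the `dataset_text_field` column name are illustrative assumptions, not the author's settings.
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer

dataset = load_dataset("TokenBender/code_instructions_122k_alpaca_style", split="train")

# 4-bit quantized base model (QLoRA setup)
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=bnb)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")  # illustrative LoRA settings

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora,
    tokenizer=tokenizer,
    max_seq_length=512,           # illustrative
    dataset_text_field="output",  # hypothetical; depends on the dataset's columns
)
trainer.train()
```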
#### Summary
- global_step: 100
- train_samples_per_second: 0.501
- train_steps_per_second: 0.125
- total_flos: 907555205713920.0
- train_loss: 0.99025949716568
- **Hardware Type:** T4 GPU
- **Hours used:** 0:13 h (~13 minutes)
- **Cloud Provider:** Google Colab
|
rs545837/speecht5_finetuned_voxpopuli_nl_lora | rs545837 | "2024-06-23T08:05:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:05:13Z" | Entry not found |
webonxd/Tony_VC | webonxd | "2024-06-23T08:08:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:08:33Z" | Entry not found |
ahnilforoosh/1234 | ahnilforoosh | "2024-06-23T08:09:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:09:49Z" | Entry not found |
screenmate/minicpm_22.06-2000 | screenmate | "2024-06-23T08:14:07Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openbmb/MiniCPM-Llama3-V-2_5",
"region:us"
] | null | "2024-06-23T08:11:59Z" | ---
base_model: openbmb/MiniCPM-Llama3-V-2_5
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Rakif215/fist_model | Rakif215 | "2024-06-23T09:16:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T08:18:27Z" | ---
base_model: unsloth/llama-2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Rakif215
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
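The card does not show loading code. Here is a minimal sketch using Unsloth's loader, assuming the repository stores weights compatible with `FastLanguageModel` (an assumption; the exact repo layout is not documented).
```python
from unsloth import FastLanguageModel

# Hedged sketch: repo layout is assumed to be Unsloth-compatible.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Rakif215/fist_model",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```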
|
Panoramax/detect_face_plate_sign | Panoramax | "2024-06-23T08:24:51Z" | 0 | 0 | null | [
"object-detection",
"license:etalab-2.0",
"region:us"
] | object-detection | "2024-06-23T08:21:52Z" | ---
license: etalab-2.0
pipeline_tag: object-detection
--- |
junannn/llama3-8b-cosmic-fusion-dynamics-merged_4bit-vllm | junannn | "2024-06-23T08:27:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-23T08:23:44Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** junannn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
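Since the repository name and tags suggest merged 4-bit (bitsandbytes) weights, a plain `transformers` load could look like the sketch below; this is untested against this exact repo.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "junannn/llama3-8b-cosmic-fusion-dynamics-merged_4bit-vllm"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",           # place layers automatically
    torch_dtype=torch.float16,
)
inputs = tok("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```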
|
tejasexpress/hf_ternary_1.1B_100B | tejasexpress | "2024-06-23T08:25:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:25:36Z" | Entry not found |
ZhZhPeng/3f_safe_draft0 | ZhZhPeng | "2024-06-23T08:34:15Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-23T08:26:39Z" | Entry not found |
itay-nakash/model_2ec771cb72_sweep_gallant-surf-825 | itay-nakash | "2024-06-23T08:27:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:27:31Z" | Entry not found |
ademaydogdu/my_awesome_qa_model | ademaydogdu | "2024-06-23T08:29:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:29:40Z" | Entry not found |
MarcusUniversee/llama_3b-health-qa | MarcusUniversee | "2024-06-23T08:32:04Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-23T08:32:04Z" | ---
license: mit
---
|
ikocemayy13938/yeeunmodel | ikocemayy13938 | "2024-06-23T09:08:42Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T08:32:16Z" | ---
license: openrail
---
|
sosuke/preference_tuning_results | sosuke | "2024-06-23T08:34:30Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:llm-book/Swallow-7b-hf-oasst1-21k-ja",
"region:us"
] | null | "2024-06-23T08:33:00Z" | ---
base_model: llm-book/Swallow-7b-hf-oasst1-21k-ja
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: preference_tuning_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# preference_tuning_results
This model is a fine-tuned version of [llm-book/Swallow-7b-hf-oasst1-21k-ja](https://huggingface.co/llm-book/Swallow-7b-hf-oasst1-21k-ja) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6610
- Rewards/chosen: -0.1479
- Rewards/rejected: -0.2665
- Rewards/accuracies: 0.5917
- Rewards/margins: 0.1186
- Logps/rejected: -146.9710
- Logps/chosen: -134.8070
- Logits/rejected: 0.3116
- Logits/chosen: 0.3255
## Model description
More information needed
## Intended uses & limitations
More information needed
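In the absence of usage notes, here is a minimal loading sketch, assuming the repository stores a PEFT adapter over the Swallow base model, as the tags indicate.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hedged sketch: loads the DPO-tuned adapter together with its base model.
model = AutoPeftModelForCausalLM.from_pretrained("sosuke/preference_tuning_results")
tokenizer = AutoTokenizer.from_pretrained("llm-book/Swallow-7b-hf-oasst1-21k-ja")
```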
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6935 | 0.0337 | 50 | 0.6908 | 0.0025 | -0.0026 | 0.5417 | 0.0050 | -144.3320 | -133.3038 | 0.1607 | 0.1710 |
| 0.6936 | 0.0673 | 100 | 0.6915 | 0.0016 | -0.0021 | 0.5750 | 0.0037 | -144.3277 | -133.3129 | 0.1674 | 0.1783 |
| 0.6905 | 0.1010 | 150 | 0.6889 | 0.0026 | -0.0067 | 0.5167 | 0.0093 | -144.3729 | -133.3024 | 0.1746 | 0.1857 |
| 0.6891 | 0.1347 | 200 | 0.6886 | 0.0109 | 0.0007 | 0.5250 | 0.0102 | -144.2993 | -133.2191 | 0.1697 | 0.1812 |
| 0.6866 | 0.1684 | 250 | 0.6865 | 0.0219 | 0.0071 | 0.5917 | 0.0148 | -144.2358 | -133.1099 | 0.1783 | 0.1895 |
| 0.6851 | 0.2020 | 300 | 0.6826 | 0.0255 | 0.0020 | 0.6000 | 0.0234 | -144.2859 | -133.0740 | 0.1736 | 0.1853 |
| 0.6842 | 0.2357 | 350 | 0.6820 | 0.0240 | -0.0014 | 0.6083 | 0.0254 | -144.3206 | -133.0886 | 0.1721 | 0.1833 |
| 0.679 | 0.2694 | 400 | 0.6761 | 0.0333 | -0.0070 | 0.5750 | 0.0404 | -144.3764 | -132.9950 | 0.1766 | 0.1877 |
| 0.6814 | 0.3030 | 450 | 0.6741 | 0.0215 | -0.0244 | 0.5333 | 0.0459 | -144.5500 | -133.1130 | 0.1943 | 0.2060 |
| 0.674 | 0.3367 | 500 | 0.6693 | 0.0179 | -0.0423 | 0.5667 | 0.0602 | -144.7297 | -133.1494 | 0.2098 | 0.2217 |
| 0.6748 | 0.3704 | 550 | 0.6691 | -0.0133 | -0.0788 | 0.5583 | 0.0655 | -145.0942 | -133.4615 | 0.2477 | 0.2594 |
| 0.6673 | 0.4040 | 600 | 0.6615 | -0.0450 | -0.1350 | 0.6000 | 0.0899 | -145.6558 | -133.7786 | 0.3043 | 0.3172 |
| 0.6769 | 0.4377 | 650 | 0.6654 | -0.0385 | -0.1222 | 0.6000 | 0.0837 | -145.5283 | -133.7136 | 0.2800 | 0.2928 |
| 0.6677 | 0.4714 | 700 | 0.6643 | -0.0537 | -0.1442 | 0.6167 | 0.0905 | -145.7482 | -133.8651 | 0.2681 | 0.2808 |
| 0.675 | 0.5051 | 750 | 0.6596 | -0.0396 | -0.1394 | 0.6083 | 0.0998 | -145.7003 | -133.7247 | 0.2512 | 0.2644 |
| 0.6633 | 0.5387 | 800 | 0.6607 | -0.0756 | -0.1792 | 0.5833 | 0.1036 | -146.0984 | -134.0848 | 0.2626 | 0.2751 |
| 0.6661 | 0.5724 | 850 | 0.6603 | -0.0903 | -0.2000 | 0.6000 | 0.1097 | -146.3066 | -134.2316 | 0.2735 | 0.2861 |
| 0.6677 | 0.6061 | 900 | 0.6619 | -0.0994 | -0.2070 | 0.5750 | 0.1076 | -146.3762 | -134.3224 | 0.2735 | 0.2864 |
| 0.6614 | 0.6397 | 950 | 0.6615 | -0.1019 | -0.2104 | 0.5750 | 0.1084 | -146.4101 | -134.3480 | 0.2690 | 0.2818 |
| 0.6514 | 0.6734 | 1000 | 0.6610 | -0.1138 | -0.2245 | 0.6000 | 0.1107 | -146.5513 | -134.4665 | 0.2835 | 0.2963 |
| 0.6625 | 0.7071 | 1050 | 0.6602 | -0.1136 | -0.2259 | 0.5833 | 0.1124 | -146.5656 | -134.4642 | 0.2873 | 0.3006 |
| 0.6421 | 0.7407 | 1100 | 0.6610 | -0.1285 | -0.2408 | 0.5833 | 0.1122 | -146.7140 | -134.6137 | 0.2892 | 0.3024 |
| 0.6438 | 0.7744 | 1150 | 0.6585 | -0.1373 | -0.2590 | 0.5750 | 0.1217 | -146.8963 | -134.7020 | 0.3015 | 0.3152 |
| 0.6534 | 0.8081 | 1200 | 0.6603 | -0.1478 | -0.2671 | 0.5917 | 0.1192 | -146.9771 | -134.8070 | 0.3120 | 0.3259 |
| 0.653 | 0.8418 | 1250 | 0.6607 | -0.1460 | -0.2651 | 0.5917 | 0.1191 | -146.9573 | -134.7881 | 0.3120 | 0.3259 |
| 0.6667 | 0.8754 | 1300 | 0.6599 | -0.1475 | -0.2678 | 0.5917 | 0.1203 | -146.9841 | -134.8036 | 0.3108 | 0.3247 |
| 0.6596 | 0.9091 | 1350 | 0.6606 | -0.1452 | -0.2632 | 0.6000 | 0.1181 | -146.9385 | -134.7802 | 0.3114 | 0.3255 |
| 0.648 | 0.9428 | 1400 | 0.6614 | -0.1475 | -0.2644 | 0.6000 | 0.1169 | -146.9505 | -134.8035 | 0.3118 | 0.3258 |
| 0.641 | 0.9764 | 1450 | 0.6610 | -0.1479 | -0.2665 | 0.5917 | 0.1186 | -146.9710 | -134.8070 | 0.3116 | 0.3255 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
Vi-VLM/llava-vistral-7b-pretrain | Vi-VLM | "2024-06-30T09:44:22Z" | 0 | 0 | transformers | [
"transformers",
"llava_llama",
"text-generation",
"vision language model",
"vi",
"en",
"dataset:Vi-VLM/Vista",
"arxiv:2303.15343",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T08:33:14Z" | ---
license: apache-2.0
datasets:
- Vi-VLM/Vista
language:
- vi
- en
tags:
- vision language model
---
<p>
<a href="https://github.com/hllj/Vistral-V">Github</a> |
<a href="https://www.kaggle.com/code/hlly34/vistral-v-notebook">Inference Notebook</a> |
<a href="https://huggingface.co/datasets/Vi-VLM/Vista">Dataset</a> |
<a href="https://huggingface.co/collections/Vi-VLM/vista-668126169f4f7654f07cae66">Model Family</a>
</p>
## Model Details
We have developed and released the Vista 7B family, which includes both a pretrained projector and a finetuned version of the Vietnamese Vision Language Model (VLM). This model is optimized for image description tasks.
We continue to expand Vistral 7B's vision capabilities using the [Llava approach](https://github.com/haotian-liu/LLaVA), leveraging our proprietary [Vista dataset](https://huggingface.co/datasets/Vi-VLM/Vista) with [Siglip](https://arxiv.org/abs/2303.15343) as an image encoder.
> **Disclaimer**: The model has not been trained on OCR tasks and may perform poorly in OCR and graph analysis. Use with caution, as we have not focused on correcting the factual knowledge of the model.
**Model developers** Vi-VLM
**Input** The model takes text and images as input.
**Output** The model generates image descriptions only.
**Model Architecture** Mistral.
## Intended Use
**Intended Use Cases** Vista is primarily intended for research applications within the Vietnamese context. This version aims to further improve the Vietnamese Vision Language Model capabilities.
**Out-of-scope** The use of Vista in any manner that violates applicable laws or regulations is strictly prohibited.
## How to use
### Use with Kaggle Notebook
To run inference using the model, follow the steps outlined in our [Kaggle Inference Notebook](https://www.kaggle.com/code/hlly34/vistral-v-notebook).
## Training process
**Training Metrics Image**: Below is a snapshot of the training metrics visualized.
![Training Metrics](https://cdn-uploads.huggingface.co/production/uploads/630a5ef0e81e1dea2cedcec0/rjf1SL3-o7IUBJerUmCDT.png)
**Weights & Biases**: Monitor the training progress and access additional analytics at our [WandB project page](https://wandb.ai/hllj/huggingface).
### Training Data
**Pretrained Model**:
- Dataset: ShareGPT4V and a subset of WIT from the [Vista dataset](https://huggingface.co/datasets/Vi-VLM/Vista).
**Finetuned Model**:
- Tasks:
- Conversation
- Complex reasoning
- Detailed description
- Dataset: Subset from the [Vista dataset](https://huggingface.co/datasets/Vi-VLM/Vista).
### Hardware
**GPU Configuration**: Cluster of 2x NVIDIA A100-SXM4-40GB, provided by Google Cloud Research and [VietAI](https://course.vietai.org/).
**GPU Usage**:
- **Pretrain**: 4 hours of GPU time.
- **Finetune**: 14 hours of GPU time.
### Training Arguments
| Parameter | Pretrain | Finetune (LoRA) |
|----------------------------|-------------------------|-------------------------------|
| **Epoch** | 1 | 1 |
| **Global batch size** | 16 | 16 |
| **Learning Scheduler** | cosine with warmup | cosine with warmup |
| **Optimizer** | AdamW | AdamW |
| **Warmup Ratio** | 0.03 | 0.03 |
| **Weight Decay** | 0.00 | 0.00 |
| **Learning rate (LLM)** | - | 1.25e-5 |
| **Learning rate (Projector)** | 1e-3 | 1.25e-6 |
| **rank** | - | 128 |
| **alpha** | - | 256 |
| **Target modules** | - | all linear layers |
## Examples
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630a5ef0e81e1dea2cedcec0/Tot0eFOJF4UQbirJxLv7o.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630a5ef0e81e1dea2cedcec0/vveQQUPFPDcOj25lvfiwg.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630a5ef0e81e1dea2cedcec0/tcwilqHy6-cPiIPrI0NP0.png)
## Responsibility & Safety
We are committed to promoting an open approach to the development of Vietnamese AI, believing that it fosters better and faster innovation. This initiative is designed to bolster the efforts of the Vietnamese AI community.
The Vista model is built for versatility across a broad spectrum of applications. However, it is important to note that it is not tailored to meet every specific developer preference for all conceivable use cases out-of-the-box. Such preferences are inherently diverse and vary significantly across different applications.
## Ethical Considerations and Limitations
The responses from this model are not intended to offend or insult any individual or organization. Therefore, the answers provided should be considered as reference material only, and users should critically assess their accuracy.
The model still has significant limitations in terms of knowledge and practical task performance capabilities.
## Future Work
We are committed to continuous improvement of the model, with specific plans to:
1. Further train the finetuned model on diverse Vision Language tasks to enhance its performance.
2. Improve the factual knowledge of the model, particularly to better adapt to Vietnamese cultural contexts.
3. Investigate the combination of different vision encoders to capture more comprehensive image features.
## Acknowledgement
We express our deep gratitude to various contributors and supporters of our project:
- **[LLaVA]**: Significant portions of the source code and instructions were utilized from the [LLaVA repository](https://github.com/haotian-liu/LLaVA), with modifications to adapt to our model architecture.
- **[Vistral]**: Immense thanks to the Vistral development team for creating an outstanding LLM for Vietnamese, accessible at [Hugging Face - Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat).
- **[Siglip]**: Grateful for the innovative multilingual vision encoder developed by the Siglip team, detailed in their [research paper](https://arxiv.org/abs/2303.15343).
- **Sponsors**: Special thanks to [VietAI] and [Google Cloud Research] for their diamond-level sponsorship, providing the computing resources essential for our project.
- **Mentors**: Our heartfelt appreciation goes to our mentors, Anh Duong Nguyen and Thanh Le, for their guidance and support.
## Citation Information
**BibTeX:**
```
@article{ViVLM_Vista_2024,
  title={Vista},
  author={Bui, Hop Van and Ha, Hoang Huy and Tran, Oanh Ngoc and Phan, Phuc Van},
  year={2024},
  month={June},
  url={https://huggingface.co/Vi-VLM/Vista}
}
``` |
itay-nakash/model_0b8bff813c_sweep_polished-bush-828 | itay-nakash | "2024-06-23T08:34:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:34:18Z" | Entry not found |
itay-nakash/model_0b8bff813c_sweep_crimson-dust-829 | itay-nakash | "2024-06-23T08:34:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:34:59Z" | Entry not found |
itay-nakash/model_0b8bff813c_sweep_rural-pond-826 | itay-nakash | "2024-06-23T08:35:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:35:02Z" | Entry not found |
itay-nakash/model_0b8bff813c_sweep_dark-durian-827 | itay-nakash | "2024-06-23T08:35:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:35:02Z" | Entry not found |
itay-nakash/model_0b8bff813c_sweep_comfy-puddle-830 | itay-nakash | "2024-06-23T08:35:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:35:07Z" | Entry not found |
itay-nakash/model_0b8bff813c_sweep_robust-grass-831 | itay-nakash | "2024-06-23T08:35:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:35:18Z" | Entry not found |
maveriq/test | maveriq | "2024-06-23T08:44:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T08:43:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
woweenie/v72-curated2-3e5-bs6ga12-3k-main-5e6cos-7k-half | woweenie | "2024-06-23T08:47:42Z" | 0 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-23T08:44:58Z" | Entry not found |
channy33/latest_checkpoint | channy33 | "2024-06-23T08:45:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:45:57Z" | Entry not found |
shaadclt/paligemma_vqav2 | shaadclt | "2024-06-23T08:46:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:46:59Z" | Entry not found |
woweenie/v71-sd21-curated2-3e5cos-cd0.02-embeddingperturb1-3k | woweenie | "2024-06-23T08:54:03Z" | 0 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-23T08:47:44Z" | Entry not found |
namrahrehman/dinov2-base-finetuned-lora-EA-rank8 | namrahrehman | "2024-06-23T12:45:05Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:facebook/dinov2-base",
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T08:51:00Z" | ---
license: apache-2.0
base_model: facebook/dinov2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dinov2-base-finetuned-lora-EA-rank8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-finetuned-lora-EA-rank8
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4365
- Accuracy: 0.8233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.7805 | 2 | 0.5030 | 0.8142 |
| No log | 1.9512 | 5 | 0.4567 | 0.8215 |
| No log | 2.7317 | 7 | 0.4511 | 0.8215 |
| 0.4811 | 3.9024 | 10 | 0.4438 | 0.8179 |
| 0.4811 | 4.6829 | 12 | 0.4392 | 0.8215 |
| 0.4811 | 5.8537 | 15 | 0.4379 | 0.8452 |
| 0.4811 | 6.6341 | 17 | 0.4365 | 0.8233 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
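No loading code is given. Below is a sketch of attaching the LoRA adapter to the base model with `peft`; the number of labels is a placeholder, not taken from the card.
```python
from transformers import AutoModelForImageClassification
from peft import PeftModel

# Hedged sketch: num_labels=2 is a placeholder; match it to the actual task.
base = AutoModelForImageClassification.from_pretrained("facebook/dinov2-base", num_labels=2)
model = PeftModel.from_pretrained(base, "namrahrehman/dinov2-base-finetuned-lora-EA-rank8")
model.eval()
```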
|
Chahatdatascience/config-0 | Chahatdatascience | "2024-06-23T10:28:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T08:53:35Z" | Entry not found |
silveroxides/ComfyUI_wav2lip-models | silveroxides | "2024-06-23T09:02:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:54:03Z" | Entry not found |
vamuchenje/llama | vamuchenje | "2024-06-23T08:54:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:54:56Z" | Entry not found |
Belwen/q-FrozenLake-v1-4x4-noSlippery | Belwen | "2024-06-23T10:14:27Z" | 0 | 0 | null | [
"Taxi-v3-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T08:55:51Z" | ---
tags:
- Taxi-v3-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3-4x4-no_slippery
type: Taxi-v3-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Belwen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
WRDLeo/Test | WRDLeo | "2024-06-23T08:56:08Z" | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | "2024-06-23T08:56:08Z" | ---
license: openrail++
---
|
brivangl/vgg_kagn_bn11sa_v4 | brivangl | "2024-06-23T09:02:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"dataset:imagenet1k",
"arxiv:2404.19756",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T08:57:45Z" | ---
license: mit
datasets:
- imagenet1k
metrics:
- accuracy
---
# VGG-like Kolmogorov-Arnold Convolutional network with Gram polynomials
This model is a convolutional version of the Kolmogorov-Arnold Network with a VGG-11-like architecture, pretrained on the ImageNet-1k dataset. KANs were originally presented in [1, 2]; the Gram version of KAN was originally presented in [3]. For more details, visit our [torch-conv-kan](https://github.com/IvanDrokin/torch-conv-kan) repository on GitHub.
## Model description
The model consists of 10 consecutive Gram ConvKAN layers with InstanceNorm2d, polynomial degree equal to 5, global average pooling, and a linear classification head:
1. BottleNeckKAGN Convolution, 32 filters, 3x3
2. Max pooling, 2x2
3. BottleNeckKAGN Convolution, 64 filters, 3x3
4. Max pooling, 2x2
5. BottleNeckKAGN Convolution, 128 filters, 3x3
6. BottleNeckKAGN Convolution, 128 filters, 3x3
7. Max pooling, 2x2
8. BottleNeckKAGN Convolution, 256 filters, 3x3
9. BottleNeckKAGN Convolution, 256 filters, 3x3
10. Max pooling, 2x2
11. BottleNeckKAGN Convolution, 256 filters, 3x3
12. BottleNeckKAGN Convolution, 256 filters, 3x3
13. Max pooling, 2x2
14. BottleNeckKAGN Convolution, 512 filters, 3x3
15. BottleNeckKAGN Convolution, 512 filters, 3x3
16. BottleNeckSelfKAGNtention, 512 filters, 3x3
17. Global Average pooling
18. Output layer, 1000 nodes.
![model image](https://github.com/IvanDrokin/torch-conv-kan/blob/main/assets/vgg_kagn_11_v2.png?raw=true)
## Intended uses & limitations
You can use the raw model for image classification or use it as pretrained model for further finetuning.
### How to use
First, clone the repository:
```
git clone https://github.com/IvanDrokin/torch-conv-kan.git
cd torch-conv-kan
pip install -r requirements.txt
```
Then you can initialize the model and load weights.
```python
import torch
import torch.nn as nn

from models import vggkagn_bn

model = vggkagn_bn(
    3,
    1000,
    groups=1,
    degree=5,
    dropout=0.05,
    l1_decay=0,
    width_scale=2,
    affine=True,
    norm_layer=nn.BatchNorm2d,
    expected_feature_shape=(1, 1),
    vgg_type='VGG11v4',
    last_attention=True,
    sa_inner_projection=None
)
model.from_pretrained('brivangl/vgg_kagn_bn11sa_v4')
```
Transforms used for validation on ImageNet-1k:
```python
from torchvision.transforms import v2
transforms_val = v2.Compose([
v2.ToImage(),
v2.Resize(256, antialias=True),
v2.CenterCrop(224),
v2.ToDtype(torch.float32, scale=True),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
## Training data
This model was trained on the ImageNet-1k dataset (1,281,167 images in the train set).
## Training procedure
The model was trained for 200 full epochs with the AdamW optimizer, using the following parameters:
```python
{'learning_rate': 0.0009, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_weight_decay': 5e-06,
'adam_epsilon': 1e-08, 'lr_warmup_steps': 7500, 'lr_power': 0.3, 'lr_end': 1e-07, 'set_grads_to_none': False}
```
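For reference, here is a minimal sketch of how these settings map onto an AdamW optimizer with a polynomial-decay warmup schedule, using the `transformers` schedule helper; the actual torch-conv-kan training loop may differ, and the stand-in module and step count are placeholders.
```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)   # stand-in module; use the VGG-KAGN model in practice
total_steps = 200 * 10_000      # placeholder: 200 epochs x assumed steps per epoch

optimizer = torch.optim.AdamW(model.parameters(), lr=9e-4, betas=(0.9, 0.999),
                              eps=1e-8, weight_decay=5e-6)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=7500,
    num_training_steps=total_steps,
    lr_end=1e-7,
    power=0.3,
)
```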
The following augmentations were used:
```python
transforms_train = v2.Compose([
v2.ToImage(),
v2.RandomHorizontalFlip(p=0.5),
v2.RandomResizedCrop(224, antialias=True),
v2.RandomChoice([v2.AutoAugment(AutoAugmentPolicy.CIFAR10),
v2.AutoAugment(AutoAugmentPolicy.IMAGENET)
]),
v2.ToDtype(torch.float32, scale=True),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
## Evaluation results
On the ImageNet-1k validation set:
| Accuracy, top1 | Accuracy, top5 | AUC (ovo) | AUC (ovr) |
|:--------------:|:--------------:|:---------:|:---------:|
| 70.684 | 89.462 | 99.624 | 99.624 |
On the ImageNet-1k test set:
Coming soon
### BibTeX entry and citation info
If you use this project in your research or wish to refer to the baseline results, please use the following BibTeX entry.
```bibtex
@misc{torch-conv-kan,
author = {Ivan Drokin},
title = {Torch Conv KAN},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/IvanDrokin/torch-conv-kan}}
}
```
## References
- [1] Ziming Liu et al., "KAN: Kolmogorov-Arnold Networks", 2024, arXiv. https://arxiv.org/abs/2404.19756
- [2] https://github.com/KindXiaoming/pykan
- [3] https://github.com/Khochawongwat/GRAMKAN |
SarehH/finetuning-sentiment-model-3000-samples | SarehH | "2024-06-23T08:59:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T08:59:02Z" | Entry not found |
hchcsuim/batch-size16_FFPP-c40_opencv-1FPS_unaugmentation | hchcsuim | "2024-06-23T09:40:29Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:01:59Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-c40_opencv-1FPS_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8468039370969567
- name: Precision
type: precision
value: 0.8510173960212436
- name: Recall
type: recall
value: 0.9749826569545612
- name: F1
type: f1
value: 0.908792089611553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c40_opencv-1FPS_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3378
- Accuracy: 0.8468
- Precision: 0.8510
- Recall: 0.9750
- F1: 0.9088
- Roc Auc: 0.8879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.4067 | 0.9998 | 1381 | 0.3378 | 0.8468 | 0.8510 | 0.9750 | 0.9088 | 0.8879 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
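No usage example is included. A minimal inference sketch with the `transformers` pipeline follows; the image path is a placeholder.
```python
from transformers import pipeline

clf = pipeline("image-classification",
               model="hchcsuim/batch-size16_FFPP-c40_opencv-1FPS_unaugmentation")
print(clf("frame.jpg"))  # hypothetical path to an extracted video frame
```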
|
Countigo/detr-finetuned-cppe-5-10k-steps | Countigo | "2024-06-23T09:02:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:02:03Z" | Entry not found |
NatalieCheong/q-FrozenLake-v1-4x4-noSlippery | NatalieCheong | "2024-06-23T09:02:09Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T09:02:06Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="NatalieCheong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
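To go beyond loading, here is a short greedy-rollout sketch; it assumes the pickle follows the Hugging Face Deep RL course format, with `"env_id"` and `"qtable"` keys in the stored dict.
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="NatalieCheong/q-FrozenLake-v1-4x4-noSlippery",
                       filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed dict with "env_id" and "qtable"

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```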
|
hchcsuim/batch-size16_Celeb-DF_opencv-1FPS_faces-expand50-aligned_unaugmentation | hchcsuim | "2024-06-23T09:16:00Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:06:50Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF_opencv-1FPS_faces-expand50-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9556106534431736
- name: Precision
type: precision
value: 0.9569131832797427
- name: Recall
type: recall
value: 0.9918110836031232
- name: F1
type: f1
value: 0.9740496563332866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF_opencv-1FPS_faces-expand50-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- Accuracy: 0.9556
- Precision: 0.9569
- Recall: 0.9918
- F1: 0.9740
- Roc Auc: 0.9813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.2155 | 0.9974 | 195 | 0.1258 | 0.9556 | 0.9569 | 0.9918 | 0.9740 | 0.9813 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
NatalieCheong/q-Taxi-v3 | NatalieCheong | "2024-06-23T09:08:51Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T09:08:49Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="NatalieCheong/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ikocemayy13938/tosenjordan | ikocemayy13938 | "2024-06-23T09:14:48Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T09:13:37Z" | ---
license: openrail
---
|
willing2024/repo_name | willing2024 | "2024-06-23T09:15:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:15:10Z" | Entry not found |
codingninja/gemma-32k-pa | codingninja | "2024-06-23T13:27:54Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T09:16:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sarvaritoktam/whisper2 | sarvaritoktam | "2024-06-23T09:18:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:18:58Z" | Entry not found |
iamalexcaspian/VictorCalavera-VictorAndValentino | iamalexcaspian | "2024-06-23T20:30:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:20:42Z" | Entry not found |
Panoramax/classify_fr_road_signs | Panoramax | "2024-06-23T09:45:44Z" | 0 | 0 | null | [
"image-classification",
"dataset:Panoramax/classified_fr_road_signs",
"license:etalab-2.0",
"model-index",
"region:us"
] | image-classification | "2024-06-23T09:21:24Z" | ---
license: etalab-2.0
datasets:
- Panoramax/classified_fr_road_signs
pipeline_tag: image-classification
model-index:
- name: classified_fr_road_signs
results:
- task:
type: image-classification
metrics:
- type: accuracy
value: 0.98717
---
# France road signs classification model
This model is a fine-tuned version of the YOLOv8 classification model, trained on our classified road signs dataset.
More than 250 types of road signs are defined in the dataset.
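A minimal inference sketch with the `ultralytics` package (the weights filename below is an assumption — point it at the classification checkpoint shipped in this repository):
```python
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical filename for the classification weights
results = model("road_sign.jpg")  # classify a cropped sign image
probs = results[0].probs
print(results[0].names[probs.top1], float(probs.top1conf))
```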
![](val_batch1_labels.jpg)
![Normalized confusion matrix](confusion_matrix_normalized.png) |
hudifu316/peft-starcoder-lora-a100 | hudifu316 | "2024-06-23T09:21:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:21:32Z" | Entry not found |
svilupp/onnx-embedders | svilupp | "2024-06-23T09:34:50Z" | 0 | 0 | transformers | [
"transformers",
"en",
"arxiv:1909.10351",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T09:21:37Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
---
# TinyBERT_L-4_H-312_v2 ONNX Model
This repository provides an ONNX version of the `TinyBERT_L-4_H-312_v2` model, originally developed by the team at [Huawei Noah's Ark Lab](https://arxiv.org/abs/1909.10351)
and ported to Transformers by [Nils Reimers](https://huggingface.co/nreimers).
The model is a compact version of BERT, designed for efficient inference and reduced memory footprint. The ONNX version includes mean pooling of the last hidden layer for convenient feature extraction.
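For reference, the mean pooling applied inside the exported graph corresponds to the following attention-mask-weighted average (a NumPy sketch, assuming the usual `(batch, seq_len, hidden)` layout):
```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average over the real tokens only
    mask = attention_mask[..., None].astype(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    return summed / counts
```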
## Model Overview
TinyBERT is a smaller version of BERT that maintains competitive performance while significantly reducing the number of parameters and computational cost. This makes it ideal for deployment in resource-constrained environments. The model is based on the work presented in the paper ["TinyBERT: Distilling BERT for Natural Language Understanding"](https://arxiv.org/abs/1909.10351).
## License
This model is distributed under the Apache 2.0 License. For more details, please refer to the [license file](https://github.com/huawei-noah/Pretrained-Language-Model/blob/master/TinyBERT/LICENSE) in the original repository.
## Model Details
- **Model:** TinyBERT_L-4_H-312_v2
- **Layers:** 4
- **Hidden Size:** 312
- **Pooling:** Mean pooling of the last hidden layer
- **Format:** ONNX
## Usage
To use this model, you will need `onnxruntime` and `transformers` installed. You can install them via pip:
```bash
pip install onnxruntime transformers
```
Below is a Python code snippet demonstrating how to run inference using this ONNX model:
```python
import onnxruntime as ort
from transformers import AutoTokenizer
model_path = "TinyBERT_L-4_H-312_v2-onnx/"
tokenizer = AutoTokenizer.from_pretrained(model_path)
ort_sess = ort.InferenceSession(model_path + "tinybert_mean_embeddings.onnx")

features = tokenizer(
    ['How many people live in Berlin?',
     'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'New York City is famous for the Metropolitan Museum of Art.'],
    padding=True, truncation=True, return_tensors="np")

# The exported graph does not take token_type_ids
onnx_inputs = {k: v for k, v in features.items() if k != 'token_type_ids'}
# The graph already applies mean pooling, so the first output is one embedding per sentence
mean_pooled_output = ort_sess.run(None, onnx_inputs)[0]
print("Mean pooled output:", mean_pooled_output)
```
Make sure to replace `model_path` with the actual path to your ONNX model directory.
## Training Details
For detailed information on the training process of TinyBERT, please refer to the [original paper](https://arxiv.org/abs/1909.10351) by Huawei Noah's Ark Lab.
## Acknowledgements
This model is based on the work by the team at Huawei Noah's Ark Lab and by Nils Reimers. Special thanks to the developers for providing the pre-trained model and making it accessible to the community. |
casque/Swmming_lassons_4_v1 | casque | "2024-06-23T09:25:05Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T09:22:07Z" | ---
license: creativeml-openrail-m
---
|
morturr/flan-t5-base-amazon-text-classification-23-6-test | morturr | "2024-06-23T10:06:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | "2024-06-23T09:23:12Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-amazon-text-classification-23-6-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-amazon-text-classification-23-6-test
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.1+cu121
- Datasets 2.10.1
- Tokenizers 0.15.2
|
sirishgam001/videomae-finetuned-engagenet-subset | sirishgam001 | "2024-06-23T17:41:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-23T09:25:19Z" | Entry not found |
hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_faces-expand10-aligned_unaugmentation | hchcsuim | "2024-06-23T10:33:00Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:26:30Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-raw_opencv-1FPS_faces-expand10-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9667701545494866
- name: Precision
type: precision
value: 0.9736804277828887
- name: Recall
type: recall
value: 0.9841488928924275
- name: F1
type: f1
value: 0.9788866730252319
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-raw_opencv-1FPS_faces-expand10-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0861
- Accuracy: 0.9668
- Precision: 0.9737
- Recall: 0.9841
- F1: 0.9789
- Roc Auc: 0.9937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
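For reference, these values map onto 🤗 `TrainingArguments` roughly as follows (a sketch only — `output_dir` and any omitted defaults are assumptions):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swin-ffpp-raw",        # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,     # 16 x 4 = total train batch size of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```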
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1164 | 1.0 | 1374 | 0.0861 | 0.9668 | 0.9737 | 0.9841 | 0.9789 | 0.9937 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
welsachy/mental-roberta-base-finetuned-depression | welsachy | "2024-06-23T09:28:41Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:mental/mental-roberta-base",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-23T09:27:51Z" | ---
license: cc-by-nc-4.0
base_model: mental/mental-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mental-roberta-base-finetuned-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental-roberta-base-finetuned-depression
This model is a fine-tuned version of [mental/mental-roberta-base](https://huggingface.co/mental/mental-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Precision: 0.8863
- Recall: 0.9168
- F1: 0.8996
- Accuracy: 0.9115
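A minimal inference sketch (the label names depend on the fine-tuning config, so treat the printed labels as placeholders):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="welsachy/mental-roberta-base-finetuned-depression")
print(clf("I haven't been able to get out of bed for days."))
```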
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 469 | 0.3852 | 0.7878 | 0.8253 | 0.7958 | 0.8667 |
| 0.5249 | 2.0 | 938 | 0.4720 | 0.8778 | 0.8722 | 0.8662 | 0.8913 |
| 0.2598 | 3.0 | 1407 | 0.5459 | 0.8975 | 0.8791 | 0.8865 | 0.8977 |
| 0.1624 | 4.0 | 1876 | 0.5022 | 0.9004 | 0.8979 | 0.8976 | 0.9072 |
| 0.1036 | 5.0 | 2345 | 0.6257 | 0.8910 | 0.8968 | 0.8931 | 0.9009 |
| 0.0668 | 6.0 | 2814 | 0.6531 | 0.9145 | 0.8927 | 0.9026 | 0.9104 |
| 0.0539 | 7.0 | 3283 | 0.6209 | 0.8552 | 0.9115 | 0.8802 | 0.8945 |
| 0.057 | 8.0 | 3752 | 0.6567 | 0.8863 | 0.9168 | 0.8996 | 0.9115 |
| 0.0523 | 9.0 | 4221 | 0.7184 | 0.9067 | 0.8984 | 0.8993 | 0.9083 |
| 0.0354 | 10.0 | 4690 | 0.7112 | 0.8874 | 0.9014 | 0.8914 | 0.9072 |
| 0.0268 | 11.0 | 5159 | 0.7168 | 0.8996 | 0.9012 | 0.8979 | 0.9083 |
| 0.0297 | 12.0 | 5628 | 0.7499 | 0.8667 | 0.9096 | 0.8847 | 0.9030 |
| 0.0242 | 13.0 | 6097 | 0.7554 | 0.8946 | 0.9014 | 0.8955 | 0.9072 |
| 0.0238 | 14.0 | 6566 | 0.7990 | 0.8909 | 0.9014 | 0.8934 | 0.9072 |
| 0.0178 | 15.0 | 7035 | 0.8298 | 0.8965 | 0.8933 | 0.8925 | 0.9051 |
| 0.0226 | 16.0 | 7504 | 0.8428 | 0.9099 | 0.8890 | 0.8973 | 0.9062 |
| 0.0226 | 17.0 | 7973 | 0.8490 | 0.8742 | 0.8983 | 0.8816 | 0.9041 |
| 0.0183 | 18.0 | 8442 | 0.8148 | 0.8940 | 0.8965 | 0.8930 | 0.9072 |
| 0.0188 | 19.0 | 8911 | 0.8146 | 0.8927 | 0.8960 | 0.8921 | 0.9062 |
| 0.015 | 20.0 | 9380 | 0.8216 | 0.8927 | 0.8960 | 0.8921 | 0.9062 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_faces-expand30-aligned_unaugmentation | hchcsuim | "2024-06-23T10:42:07Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:35:12Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-raw_opencv-1FPS_faces-expand30-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9592617704179547
- name: Precision
type: precision
value: 0.9586435187816105
- name: Recall
type: recall
value: 0.9906916650761257
- name: F1
type: f1
value: 0.9744041473223634
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-raw_opencv-1FPS_faces-expand30-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1043
- Accuracy: 0.9593
- Precision: 0.9586
- Recall: 0.9907
- F1: 0.9744
- Roc Auc: 0.9935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0935 | 0.9994 | 1359 | 0.1043 | 0.9593 | 0.9586 | 0.9907 | 0.9744 | 0.9935 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_faces-expand20-aligned_unaugmentation | hchcsuim | "2024-06-23T10:41:20Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:35:28Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-raw_opencv-1FPS_faces-expand20-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9652089518154829
- name: Precision
type: precision
value: 0.9760606985566651
- name: Recall
type: recall
value: 0.979577546971642
- name: F1
type: f1
value: 0.9778159605681793
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-raw_opencv-1FPS_faces-expand20-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0892
- Accuracy: 0.9652
- Precision: 0.9761
- Recall: 0.9796
- F1: 0.9778
- Roc Auc: 0.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1093 | 0.9996 | 1368 | 0.0892 | 0.9652 | 0.9761 | 0.9796 | 0.9778 | 0.9929 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hcy5561/xlm-roberta-base-finetuned-panx-de-fr | hcy5561 | "2024-06-23T09:55:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-23T09:36:12Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1651
- F1: 0.8596
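A minimal inference sketch with the token-classification pipeline (the entity labels depend on the fine-tuning data):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="hcy5561/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```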
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2793 | 1.0 | 715 | 0.1802 | 0.8181 |
| 0.1481 | 2.0 | 1430 | 0.1574 | 0.8498 |
| 0.0958 | 3.0 | 2145 | 0.1651 | 0.8596 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Hia0814/aura | Hia0814 | "2024-06-23T09:40:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:40:22Z" | Entry not found |
hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_faces-expand40-aligned_unaugmentation | hchcsuim | "2024-06-23T10:46:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:41:14Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-raw_opencv-1FPS_faces-expand40-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9639797349780308
- name: Precision
type: precision
value: 0.9690304160354271
- name: Recall
type: recall
value: 0.9854673125638861
- name: F1
type: f1
value: 0.9771797489552041
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-raw_opencv-1FPS_faces-expand40-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0934
- Accuracy: 0.9640
- Precision: 0.9690
- Recall: 0.9855
- F1: 0.9772
- Roc Auc: 0.9930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0939 | 1.0 | 1348 | 0.0934 | 0.9640 | 0.9690 | 0.9855 | 0.9772 | 0.9930 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Anderlaxe/q-FrozenLake-v1-4x4-noSlippery | Anderlaxe | "2024-06-23T09:45:14Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T09:45:11Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Anderlaxe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hchcsuim/batch-size16_FFPP-c40_opencv-1FPS_faces-expand50-aligned_unaugmentation | hchcsuim | "2024-06-23T10:12:56Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:45:17Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-c40_opencv-1FPS_faces-expand50-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8765686352166385
- name: Precision
type: precision
value: 0.9057447252976812
- name: Recall
type: recall
value: 0.9401535192332712
- name: F1
type: f1
value: 0.9226284206494446
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c40_opencv-1FPS_faces-expand50-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2819
- Accuracy: 0.8766
- Precision: 0.9057
- Recall: 0.9402
- F1: 0.9226
- Roc Auc: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3591 | 1.0 | 1381 | 0.2819 | 0.8766 | 0.9057 | 0.9402 | 0.9226 | 0.9232 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hchcsuim/batch-size16_FFPP-c40_opencv-1FPS_faces-expand10-aligned_unaugmentation | hchcsuim | "2024-06-23T10:53:37Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:45:40Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-c40_opencv-1FPS_faces-expand10-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8733210369683048
- name: Precision
type: precision
value: 0.9096394054145708
- name: Recall
type: recall
value: 0.9306127759226333
- name: F1
type: f1
value: 0.9200065738233214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c40_opencv-1FPS_faces-expand10-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2914
- Accuracy: 0.8733
- Precision: 0.9096
- Recall: 0.9306
- F1: 0.9200
- Roc Auc: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3689 | 1.0 | 1381 | 0.2914 | 0.8733 | 0.9096 | 0.9306 | 0.9200 | 0.9185 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hchcsuim/batch-size16_FFPP-c40_opencv-1FPS_faces-expand0-aligned_unaugmentation | hchcsuim | "2024-06-23T10:53:51Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:46:37Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-c40_opencv-1FPS_faces-expand0-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8527265114910663
- name: Precision
type: precision
value: 0.9116965751817968
- name: Recall
type: recall
value: 0.898925943593969
- name: F1
type: f1
value: 0.9052662226589511
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c40_opencv-1FPS_faces-expand0-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3336
- Accuracy: 0.8527
- Precision: 0.9117
- Recall: 0.8989
- F1: 0.9053
- Roc Auc: 0.8991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3834 | 1.0 | 1381 | 0.3336 | 0.8527 | 0.9117 | 0.8989 | 0.9053 | 0.8991 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ransaka/singlish_tokenizer_16k | Ransaka | "2024-06-23T09:47:59Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T09:47:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
itay-nakash/model_71dd0b85f5_sweep_pious-aardvark-832 | itay-nakash | "2024-06-23T09:48:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:48:34Z" | Entry not found |
ZhZhPeng/3f_safe_draft2 | ZhZhPeng | "2024-06-23T10:01:27Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-23T09:53:57Z" | Entry not found |
avibh/xrg | avibh | "2024-06-23T09:55:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:55:02Z" | Entry not found |
hchcsuim/batch-size16_FFPP-c40_opencv-1FPS_faces-expand20-aligned_unaugmentation | hchcsuim | "2024-06-23T11:07:54Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-23T09:55:28Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-c40_opencv-1FPS_faces-expand20-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8725628868545823
- name: Precision
type: precision
value: 0.9128469796552658
- name: Recall
type: recall
value: 0.9255677465053413
- name: F1
type: f1
value: 0.9191633529048652
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c40_opencv-1FPS_faces-expand20-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2936
- Accuracy: 0.8726
- Precision: 0.9128
- Recall: 0.9256
- F1: 0.9192
- Roc Auc: 0.9186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3713 | 1.0 | 1381 | 0.2936 | 0.8726 | 0.9128 | 0.9256 | 0.9192 | 0.9186 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hcy5561/xlm-roberta-base-finetuned-panx-fr | hcy5561 | "2024-06-23T10:12:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-23T09:55:47Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
- F1: 0.8433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5718 | 1.0 | 191 | 0.3067 | 0.7756 |
| 0.2656 | 2.0 | 382 | 0.2746 | 0.8213 |
| 0.1796 | 3.0 | 573 | 0.2778 | 0.8433 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Mutonix/Vriptor-STLLM | Mutonix | "2024-06-23T10:37:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"st_llm_hf",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T09:56:47Z" | ---
license: apache-2.0
---
|
tugas-ds/house | tugas-ds | "2024-06-23T09:59:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T09:57:57Z" | Entry not found |
Panoramax/detect_fr_road_signs_subsigns | Panoramax | "2024-06-23T10:09:06Z" | 0 | 0 | null | [
"object-detection",
"dataset:Panoramax/fr_road_sign_subsign",
"license:etalab-2.0",
"region:us"
] | object-detection | "2024-06-23T09:58:29Z" | ---
license: etalab-2.0
datasets:
- Panoramax/fr_road_sign_subsign
pipeline_tag: object-detection
---
# French road signs / sub-signs detection model
This model detects the main road signs and their sub-signs.
![](val_batch1_labels.jpg)
```
Class Images Instances P R mAP50 mAP50-95
all 473 958 0.978 0.983 0.992 0.897
sign 473 486 0.988 0.981 0.992 0.931
sub-sign 473 472 0.969 0.986 0.992 0.863
```
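A minimal inference sketch with the `ultralytics` package (the weights filename below is an assumption — point it at the detection checkpoint shipped in this repository):
```python
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical filename for the detection weights
results = model("street_scene.jpg")
for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]   # "sign" or "sub-sign"
    print(cls_name, float(box.conf), box.xyxy[0].tolist())
```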
![](results.png) |
Anderlaxe/q-Taxi-v3 | Anderlaxe | "2024-06-23T09:58:40Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T09:58:37Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Anderlaxe/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
limaatulya/my_awesome_billsum_model_74 | limaatulya | "2024-06-23T10:00:30Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-23T09:58:42Z" | Entry not found |
Belwen/q-Taxi-v3 | Belwen | "2024-06-23T10:18:22Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-23T10:01:23Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Belwen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ugurcelebi/DevOpsGPT-1.2 | ugurcelebi | "2024-06-23T10:02:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/qwen2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T10:02:07Z" | ---
base_model: unsloth/qwen2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** ugurcelebi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
limaatulya/my_awesome_billsum_model_76 | limaatulya | "2024-06-23T10:09:55Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-23T10:05:34Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_76
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4150
- Rouge1: 0.9792
- Rouge2: 0.8868
- Rougel: 0.9405
- Rougelsum: 0.94
- Gen Len: 4.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 0.3399 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 2.0 | 24 | 0.3413 | 0.9795 | 0.8917 | 0.941 | 0.9417 | 5.0208 |
| No log | 3.0 | 36 | 0.3375 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 4.0 | 48 | 0.3497 | 0.9795 | 0.8917 | 0.941 | 0.9417 | 5.0208 |
| No log | 5.0 | 60 | 0.3593 | 0.9732 | 0.8592 | 0.9226 | 0.9246 | 5.0625 |
| No log | 6.0 | 72 | 0.3610 | 0.9732 | 0.8592 | 0.9226 | 0.9246 | 5.0625 |
| No log | 7.0 | 84 | 0.3733 | 0.9732 | 0.8592 | 0.9226 | 0.9246 | 5.0625 |
| No log | 8.0 | 96 | 0.3603 | 0.9735 | 0.8668 | 0.9236 | 0.9241 | 5.0208 |
| No log | 9.0 | 108 | 0.3482 | 0.9735 | 0.8668 | 0.9236 | 0.9241 | 5.0208 |
| No log | 10.0 | 120 | 0.3502 | 0.9735 | 0.8668 | 0.9236 | 0.9241 | 5.0208 |
| No log | 11.0 | 132 | 0.3529 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 12.0 | 144 | 0.3542 | 0.9735 | 0.8668 | 0.9236 | 0.9241 | 5.0208 |
| No log | 13.0 | 156 | 0.3619 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 14.0 | 168 | 0.3750 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 15.0 | 180 | 0.3778 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 16.0 | 192 | 0.3731 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 17.0 | 204 | 0.3651 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 18.0 | 216 | 0.3695 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 19.0 | 228 | 0.3884 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 20.0 | 240 | 0.3913 | 0.9765 | 0.8799 | 0.932 | 0.933 | 5.0417 |
| No log | 21.0 | 252 | 0.3775 | 0.9795 | 0.8917 | 0.941 | 0.9417 | 5.0208 |
| No log | 22.0 | 264 | 0.3539 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 23.0 | 276 | 0.3635 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 24.0 | 288 | 0.3701 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 25.0 | 300 | 0.3684 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 26.0 | 312 | 0.3642 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 27.0 | 324 | 0.3627 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 28.0 | 336 | 0.3648 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 29.0 | 348 | 0.3650 | 0.9769 | 0.8778 | 0.9325 | 0.9335 | 5.0 |
| No log | 30.0 | 360 | 0.3776 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 31.0 | 372 | 0.3823 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 32.0 | 384 | 0.3647 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 33.0 | 396 | 0.3687 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 34.0 | 408 | 0.3808 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 35.0 | 420 | 0.3876 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 36.0 | 432 | 0.3691 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 37.0 | 444 | 0.3604 | 0.9851 | 0.9236 | 0.9573 | 0.9583 | 4.9792 |
| No log | 38.0 | 456 | 0.3620 | 0.9851 | 0.9236 | 0.9573 | 0.9583 | 4.9792 |
| No log | 39.0 | 468 | 0.3672 | 0.9821 | 0.9111 | 0.9474 | 0.9484 | 5.0 |
| No log | 40.0 | 480 | 0.3753 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 41.0 | 492 | 0.3718 | 0.9821 | 0.9111 | 0.9474 | 0.9484 | 5.0 |
| 0.0456 | 42.0 | 504 | 0.3747 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0456 | 43.0 | 516 | 0.3900 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0456 | 44.0 | 528 | 0.3961 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 45.0 | 540 | 0.3949 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 46.0 | 552 | 0.3953 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 47.0 | 564 | 0.3953 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0456 | 48.0 | 576 | 0.3891 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0456 | 49.0 | 588 | 0.3811 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 50.0 | 600 | 0.3826 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0456 | 51.0 | 612 | 0.3850 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0456 | 52.0 | 624 | 0.3851 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 53.0 | 636 | 0.3937 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 54.0 | 648 | 0.3990 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 55.0 | 660 | 0.4056 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 56.0 | 672 | 0.4101 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| 0.0456 | 57.0 | 684 | 0.4103 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| 0.0456 | 58.0 | 696 | 0.4083 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 59.0 | 708 | 0.4045 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 60.0 | 720 | 0.4109 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 61.0 | 732 | 0.4154 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 62.0 | 744 | 0.4149 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 63.0 | 756 | 0.4133 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 64.0 | 768 | 0.4194 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 65.0 | 780 | 0.4339 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0456 | 66.0 | 792 | 0.4413 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0456 | 67.0 | 804 | 0.4265 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 68.0 | 816 | 0.4261 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 69.0 | 828 | 0.4187 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 70.0 | 840 | 0.4231 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 71.0 | 852 | 0.4243 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 72.0 | 864 | 0.4159 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 73.0 | 876 | 0.4133 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 74.0 | 888 | 0.4130 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 75.0 | 900 | 0.4112 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 76.0 | 912 | 0.4096 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 77.0 | 924 | 0.4079 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 78.0 | 936 | 0.4056 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 79.0 | 948 | 0.4030 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 80.0 | 960 | 0.4078 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 81.0 | 972 | 0.4078 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 82.0 | 984 | 0.4074 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0456 | 83.0 | 996 | 0.4099 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 84.0 | 1008 | 0.4148 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 85.0 | 1020 | 0.4180 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 86.0 | 1032 | 0.4147 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 87.0 | 1044 | 0.4145 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 88.0 | 1056 | 0.4168 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 89.0 | 1068 | 0.4171 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 90.0 | 1080 | 0.4158 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 91.0 | 1092 | 0.4154 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 92.0 | 1104 | 0.4155 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 93.0 | 1116 | 0.4157 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 94.0 | 1128 | 0.4165 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 95.0 | 1140 | 0.4165 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 96.0 | 1152 | 0.4151 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 97.0 | 1164 | 0.4149 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 98.0 | 1176 | 0.4149 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 99.0 | 1188 | 0.4150 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0238 | 100.0 | 1200 | 0.4150 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Nadisan/Nads | Nadisan | "2024-06-23T10:07:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:07:36Z" | Entry not found |
xinoooo/anime | xinoooo | "2024-06-23T10:08:06Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T10:08:06Z" | ---
license: apache-2.0
---
|
limaatulya/my_awesome_billsum_model_78 | limaatulya | "2024-06-23T10:16:12Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-23T10:12:07Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_78
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_78
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5080
- Rouge1: 0.9792
- Rouge2: 0.8868
- Rougel: 0.9405
- Rougelsum: 0.94
- Gen Len: 4.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
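For illustration, here is a minimal sketch of how these settings could be expressed with the 🤗 `Seq2SeqTrainer` API. The toy dataset, the `summarize:` prefix, and the `output_dir` name are assumptions (the card does not disclose its training data); `fp16` stands in for "Native AMP" and only activates on a CUDA device.

```python
import torch
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

# Toy stand-in for the undisclosed training data: one text/summary pair.
raw = Dataset.from_dict({
    "text": ["summarize: The bill amends section 4 to extend the filing deadline."],
    "summary": ["Amends section 4."],
})

def encode(batch):
    model_inputs = tokenizer(batch["text"], truncation=True)
    model_inputs["labels"] = tokenizer(text_target=batch["summary"], truncation=True)["input_ids"]
    return model_inputs

train_ds = eval_ds = raw.map(encode, batched=True, remove_columns=["text", "summary"])

# Mirrors the hyperparameters listed above; optimizer and betas match the
# Trainer defaults, so they need no explicit arguments.
args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_billsum_model_78",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=torch.cuda.is_available(),  # "Native AMP" mixed precision
    eval_strategy="epoch",           # the card reports metrics once per epoch
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```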
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 0.4089 | 0.9821 | 0.9104 | 0.9484 | 0.9484 | 4.9583 |
| No log | 2.0 | 24 | 0.4068 | 0.9821 | 0.9104 | 0.9484 | 0.9484 | 4.9583 |
| No log | 3.0 | 36 | 0.4284 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 4.0 | 48 | 0.4548 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 5.0 | 60 | 0.4590 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 6.0 | 72 | 0.4543 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 7.0 | 84 | 0.4863 | 0.9752 | 0.8708 | 0.9311 | 0.9311 | 5.0417 |
| No log | 8.0 | 96 | 0.4935 | 0.9732 | 0.8569 | 0.9221 | 0.9216 | 5.0208 |
| No log | 9.0 | 108 | 0.4931 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| No log | 10.0 | 120 | 0.4817 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| No log | 11.0 | 132 | 0.4741 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| No log | 12.0 | 144 | 0.4732 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| No log | 13.0 | 156 | 0.4742 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| No log | 14.0 | 168 | 0.4736 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 15.0 | 180 | 0.4680 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 16.0 | 192 | 0.4534 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 17.0 | 204 | 0.4412 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 18.0 | 216 | 0.4341 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 19.0 | 228 | 0.4317 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 20.0 | 240 | 0.4315 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 21.0 | 252 | 0.4313 | 0.9792 | 0.8903 | 0.9395 | 0.9405 | 5.0208 |
| No log | 22.0 | 264 | 0.4277 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 23.0 | 276 | 0.4376 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 24.0 | 288 | 0.4432 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 25.0 | 300 | 0.4450 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 26.0 | 312 | 0.4468 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 27.0 | 324 | 0.4415 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 28.0 | 336 | 0.4560 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 29.0 | 348 | 0.4713 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 30.0 | 360 | 0.4732 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 31.0 | 372 | 0.4726 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 32.0 | 384 | 0.4682 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 33.0 | 396 | 0.4647 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 34.0 | 408 | 0.4644 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| No log | 35.0 | 420 | 0.4657 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 36.0 | 432 | 0.4643 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 37.0 | 444 | 0.4572 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 38.0 | 456 | 0.4447 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 39.0 | 468 | 0.4437 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 40.0 | 480 | 0.4684 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| No log | 41.0 | 492 | 0.4722 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0088 | 42.0 | 504 | 0.4716 | 0.9821 | 0.9007 | 0.9479 | 0.9494 | 5.0 |
| 0.0088 | 43.0 | 516 | 0.4803 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 44.0 | 528 | 0.4854 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0088 | 45.0 | 540 | 0.4830 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 46.0 | 552 | 0.4819 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 47.0 | 564 | 0.4812 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 48.0 | 576 | 0.4806 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 49.0 | 588 | 0.4762 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 50.0 | 600 | 0.4737 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 51.0 | 612 | 0.4735 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 52.0 | 624 | 0.4738 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 53.0 | 636 | 0.4736 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 54.0 | 648 | 0.4738 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 55.0 | 660 | 0.4776 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 56.0 | 672 | 0.4866 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 57.0 | 684 | 0.4926 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 58.0 | 696 | 0.4938 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 59.0 | 708 | 0.4902 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 60.0 | 720 | 0.4962 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 61.0 | 732 | 0.5033 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 62.0 | 744 | 0.5043 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 63.0 | 756 | 0.5025 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 64.0 | 768 | 0.5176 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 65.0 | 780 | 0.5708 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0088 | 66.0 | 792 | 0.5707 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0088 | 67.0 | 804 | 0.5278 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0088 | 68.0 | 816 | 0.5179 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 69.0 | 828 | 0.5164 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 70.0 | 840 | 0.5504 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0088 | 71.0 | 852 | 0.5584 | 0.9762 | 0.8691 | 0.9311 | 0.9311 | 5.0 |
| 0.0088 | 72.0 | 864 | 0.5281 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 73.0 | 876 | 0.5198 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 74.0 | 888 | 0.5176 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 75.0 | 900 | 0.5103 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 76.0 | 912 | 0.5068 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 77.0 | 924 | 0.5030 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 78.0 | 936 | 0.5025 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 79.0 | 948 | 0.4968 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 80.0 | 960 | 0.5113 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 81.0 | 972 | 0.5083 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 82.0 | 984 | 0.5031 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0088 | 83.0 | 996 | 0.5066 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 84.0 | 1008 | 0.5177 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 85.0 | 1020 | 0.5192 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 86.0 | 1032 | 0.5104 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 87.0 | 1044 | 0.5085 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 88.0 | 1056 | 0.5130 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 89.0 | 1068 | 0.5116 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 90.0 | 1080 | 0.5081 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 91.0 | 1092 | 0.5074 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 92.0 | 1104 | 0.5090 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 93.0 | 1116 | 0.5097 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 94.0 | 1128 | 0.5123 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 95.0 | 1140 | 0.5118 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 96.0 | 1152 | 0.5089 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 97.0 | 1164 | 0.5080 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 98.0 | 1176 | 0.5079 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 99.0 | 1188 | 0.5076 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
| 0.0059 | 100.0 | 1200 | 0.5080 | 0.9792 | 0.8868 | 0.9405 | 0.94 | 4.9792 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hcy5561/xlm-roberta-base-finetuned-panx-it | hcy5561 | "2024-06-23T10:33:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-23T10:12:54Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
- F1: 0.8227
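As a quick usage sketch, the checkpoint can be loaded through the token-classification pipeline. This assumes the repo is public on the Hub and keeps the PAN-X/WikiANN-style NER labels its name implies:

```python
from transformers import pipeline

# Repo id taken from this card; label names depend on the fine-tuning data.
ner = pipeline(
    "token-classification",
    model="hcy5561/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Giuseppe Verdi nacque a Busseto, in Italia."))
```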
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7316 | 1.0 | 70 | 0.3194 | 0.7475 |
| 0.2917 | 2.0 | 140 | 0.2708 | 0.8006 |
| 0.2007 | 3.0 | 210 | 0.2606 | 0.8227 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
NoNameFactory/llama-3-8b-4bit-ContdPT_1_10_noEOS_callcenter | NoNameFactory | "2024-06-23T10:18:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T10:13:09Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** hsnam95
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
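A minimal sketch of reloading this checkpoint with Unsloth for inference. The repo id comes from this card, while `max_seq_length`, 4-bit loading, and the sample prompt are assumptions (Unsloth requires a CUDA device):

```python
from unsloth import FastLanguageModel

# max_seq_length and load_in_4bit are assumptions; the card does not state them.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NoNameFactory/llama-3-8b-4bit-ContdPT_1_10_noEOS_callcenter",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster decoding path

inputs = tokenizer("Customer: I was double-billed last month.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```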
|
itay-nakash/model_2ec771cb72_sweep_lemon-wave-835 | itay-nakash | "2024-06-23T10:13:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:13:46Z" | Entry not found |
starnet/01-star21-06-23-01 | starnet | "2024-06-23T10:21:49Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-06-23T10:13:58Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mbayan/test-finetuned | mbayan | "2024-06-23T10:24:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-23T10:15:00Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** mbayan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Khalid0013/Trial | Khalid0013 | "2024-06-23T10:15:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:15:31Z" | Entry not found |
itay-nakash/model_6d5c5a99e5_sweep_ruby-dawn-839 | itay-nakash | "2024-06-23T10:16:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:16:17Z" | Entry not found |
itay-nakash/model_71dd0b85f5_sweep_breezy-pyramid-837 | itay-nakash | "2024-06-23T10:16:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:16:18Z" | Entry not found |
panxinyang/Qwen-Qwen1.5-7B-1719137856 | panxinyang | "2024-06-23T10:17:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:17:36Z" | Entry not found |
Fischerboot/sophie-new-but-not-improved | Fischerboot | "2024-06-23T10:18:54Z" | 0 | 0 | peft | [
"peft",
"llama",
"generated_from_trainer",
"base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-06-23T10:18:31Z" | ---
base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
library_name: peft
tags:
- generated_from_trainer
model-index:
- name: outputs/8-rank-1-epoch-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: llama3
datasets:
- path: Fischerboot/dahset
type: sharegpt
conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./outputs/8-rank-1-epoch-new
adapter: qlora
lora_model_dir:
sequence_len: 128
sample_packing: false
pad_to_sequence_len: true
lora_r: 8
lora_alpha: 4
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 8.0
loss_watchdog_patience: 3
eval_sample_packing: false
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|begin_of_text|>"
eos_token: "<|end_of_text|>"
pad_token: "<|end_of_text|>"
```
</details><br>
# outputs/8-rank-1-epoch-new
This model is a fine-tuned version of [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2018 | 0.005 | 1 | 6.2425 |
| 0.6339 | 0.25 | 50 | 1.7478 |
| 2.0693 | 0.5 | 100 | 1.5628 |
| 1.104 | 0.75 | 150 | 1.4632 |
| 1.5272 | 1.0 | 200 | 1.4538 |
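Because the config above sets `adapter: qlora`, this repo holds a PEFT adapter rather than full weights. A hedged sketch of attaching it to the base model, assuming both repos are public:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Load the QLoRA adapter trained by the config above on top of the base weights.
model = PeftModel.from_pretrained(base, "Fischerboot/sophie-new-but-not-improved")
```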
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
itay-nakash/model_9539ee4e06_sweep_gentle-wind-842 | itay-nakash | "2024-06-23T10:18:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:18:42Z" | Entry not found |
itay-nakash/model_47b4c49ddb_sweep_fresh-totem-843 | itay-nakash | "2024-06-23T10:20:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:20:09Z" | Entry not found |
limaatulya/my_awesome_billsum_model_80 | limaatulya | "2024-06-23T10:26:55Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-23T10:22:48Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_80
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1923
- Rouge1: 0.9697
- Rouge2: 0.8445
- Rougel: 0.9199
- Rougelsum: 0.9179
- Gen Len: 4.9583
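For reference, a minimal inference sketch. The repo id comes from this card; the `summarize:` prefix is the usual T5 convention and may not match the undisclosed training data, and the low Gen Len metric suggests short, title-like outputs:

```python
from transformers import pipeline

# Availability of this checkpoint on the Hub is an assumption.
summarizer = pipeline("summarization", model="limaatulya/my_awesome_billsum_model_80")
text = "summarize: The bill amends section 4 to extend the filing deadline by 90 days."
# Short outputs are expected: the card's Gen Len metric hovers around 5 tokens.
print(summarizer(text, max_new_tokens=8)[0]["summary_text"])
```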
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 2.0545 | 0.4101 | 0.2839 | 0.3907 | 0.3895 | 16.8125 |
| No log | 2.0 | 24 | 1.4437 | 0.442 | 0.3195 | 0.4261 | 0.4245 | 15.9583 |
| No log | 3.0 | 36 | 0.8267 | 0.5727 | 0.4315 | 0.541 | 0.5416 | 12.8125 |
| No log | 4.0 | 48 | 0.5186 | 0.9583 | 0.8429 | 0.9113 | 0.91 | 5.25 |
| No log | 5.0 | 60 | 0.4535 | 0.9739 | 0.8607 | 0.9276 | 0.9271 | 4.875 |
| No log | 6.0 | 72 | 0.4258 | 0.9769 | 0.8768 | 0.9365 | 0.9365 | 4.8958 |
| No log | 7.0 | 84 | 0.4014 | 0.9798 | 0.8869 | 0.9454 | 0.9464 | 4.9167 |
| No log | 8.0 | 96 | 0.3779 | 0.9798 | 0.8869 | 0.9454 | 0.9464 | 4.9167 |
| No log | 9.0 | 108 | 0.3663 | 0.9769 | 0.8726 | 0.9365 | 0.9375 | 4.9375 |
| No log | 10.0 | 120 | 0.3554 | 0.9687 | 0.8444 | 0.922 | 0.9226 | 5.0 |
| No log | 11.0 | 132 | 0.3461 | 0.9687 | 0.8444 | 0.922 | 0.9226 | 5.0 |
| No log | 12.0 | 144 | 0.3339 | 0.9716 | 0.8569 | 0.9314 | 0.9314 | 4.9792 |
| No log | 13.0 | 156 | 0.3242 | 0.9716 | 0.8569 | 0.9314 | 0.9314 | 4.9792 |
| No log | 14.0 | 168 | 0.3155 | 0.9716 | 0.8569 | 0.9314 | 0.9314 | 4.9792 |
| No log | 15.0 | 180 | 0.3030 | 0.9716 | 0.8569 | 0.9314 | 0.9314 | 4.9792 |
| No log | 16.0 | 192 | 0.2979 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 17.0 | 204 | 0.2957 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 18.0 | 216 | 0.2950 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 19.0 | 228 | 0.2840 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 20.0 | 240 | 0.2778 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 21.0 | 252 | 0.2662 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 22.0 | 264 | 0.2609 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 23.0 | 276 | 0.2587 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 24.0 | 288 | 0.2567 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 25.0 | 300 | 0.2604 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 26.0 | 312 | 0.2540 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 27.0 | 324 | 0.2514 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 28.0 | 336 | 0.2437 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 29.0 | 348 | 0.2370 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 30.0 | 360 | 0.2369 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 31.0 | 372 | 0.2347 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 32.0 | 384 | 0.2329 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 33.0 | 396 | 0.2327 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 34.0 | 408 | 0.2271 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 35.0 | 420 | 0.2231 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 36.0 | 432 | 0.2177 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 37.0 | 444 | 0.2168 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 38.0 | 456 | 0.2154 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| No log | 39.0 | 468 | 0.2187 | 0.9676 | 0.8361 | 0.9193 | 0.9173 | 5.0 |
| No log | 40.0 | 480 | 0.2202 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| No log | 41.0 | 492 | 0.2164 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 42.0 | 504 | 0.2160 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 43.0 | 516 | 0.2179 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 44.0 | 528 | 0.2182 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 45.0 | 540 | 0.2206 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 46.0 | 552 | 0.2172 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 47.0 | 564 | 0.2128 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 48.0 | 576 | 0.2194 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 49.0 | 588 | 0.2204 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 50.0 | 600 | 0.2124 | 0.971 | 0.8468 | 0.9222 | 0.9202 | 4.9583 |
| 0.4771 | 51.0 | 612 | 0.2136 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 52.0 | 624 | 0.2119 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 53.0 | 636 | 0.2085 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 54.0 | 648 | 0.2115 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 55.0 | 660 | 0.2133 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 56.0 | 672 | 0.2087 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 57.0 | 684 | 0.2057 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 58.0 | 696 | 0.2095 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.4771 | 59.0 | 708 | 0.2105 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 60.0 | 720 | 0.2123 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 61.0 | 732 | 0.2120 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 62.0 | 744 | 0.2132 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 63.0 | 756 | 0.2117 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 64.0 | 768 | 0.2068 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 65.0 | 780 | 0.2049 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 66.0 | 792 | 0.2054 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 67.0 | 804 | 0.2029 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 68.0 | 816 | 0.1995 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 69.0 | 828 | 0.1946 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 70.0 | 840 | 0.1975 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 71.0 | 852 | 0.1995 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 72.0 | 864 | 0.2009 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 73.0 | 876 | 0.2050 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 74.0 | 888 | 0.2039 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 75.0 | 900 | 0.2040 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 76.0 | 912 | 0.2020 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 77.0 | 924 | 0.2003 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 78.0 | 936 | 0.1992 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 79.0 | 948 | 0.1984 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 80.0 | 960 | 0.1971 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 81.0 | 972 | 0.1995 | 0.9675 | 0.8359 | 0.9136 | 0.9111 | 4.9792 |
| 0.4771 | 82.0 | 984 | 0.2007 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.4771 | 83.0 | 996 | 0.2020 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 84.0 | 1008 | 0.2007 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 85.0 | 1020 | 0.1967 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 86.0 | 1032 | 0.1975 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 87.0 | 1044 | 0.1967 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 88.0 | 1056 | 0.1947 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 89.0 | 1068 | 0.1925 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 90.0 | 1080 | 0.1926 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 91.0 | 1092 | 0.1937 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 92.0 | 1104 | 0.1934 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 93.0 | 1116 | 0.1929 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 94.0 | 1128 | 0.1929 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 95.0 | 1140 | 0.1928 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 96.0 | 1152 | 0.1927 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 97.0 | 1164 | 0.1927 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 98.0 | 1176 | 0.1925 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 99.0 | 1188 | 0.1925 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
| 0.113 | 100.0 | 1200 | 0.1923 | 0.9697 | 0.8445 | 0.9199 | 0.9179 | 4.9583 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
inflaton/Qwen2-7B-Instruct-MAC-lora | inflaton | "2024-06-23T10:28:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/qwen2-7b-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-23T10:25:48Z" | ---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** inflaton
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-7b-instruct-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
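A hedged sketch of chatting with this SFT checkpoint via the tokenizer's chat template. It assumes the repo contains full (merged) weights rather than only an adapter, despite the "lora" in its name, and the sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "inflaton/Qwen2-7B-Instruct-MAC-lora"  # assumed to hold merged weights
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Translate to English: 早晨好"}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(prompt_ids, max_new_tokens=64)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(out[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```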
|
CHE-72-ZLab/Alibaba-Qwen1.5-4B-Chat-GGUF | CHE-72-ZLab | "2024-06-23T10:27:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:27:32Z" | Entry not found |
CHE-72-ZLab/Alibaba-Qwen1.5-7B-Chat-GGUF | CHE-72-ZLab | "2024-06-23T10:28:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:28:17Z" | Entry not found |
CHE-72-ZLab/Alibaba-Qwen1.5-14B-Chat-GGUF | CHE-72-ZLab | "2024-06-23T10:28:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T10:28:37Z" | Entry not found |