| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
arham061/distilhubert-finetuned-RHD_Dataset | arham061 | "2024-02-19T20:27:03Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-12-13T16:43:08Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-RHD_Dataset
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8048780487804879
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-RHD_Dataset
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9447
- Accuracy: 0.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
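For reference, those optimizer and schedule settings can be sketched in plain PyTorch (the run itself used the Hugging Face `Trainer`; the toy module below only stands in for the model, and the step counts are taken from the results table):

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder for the fine-tuned network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

total_steps = 460                      # 10 epochs x 46 steps per epoch
warmup_steps = int(0.1 * total_steps)  # lr_scheduler_warmup_ratio: 0.1

def linear_warmup_decay(step):
    # Linear warmup to the peak LR, then linear decay down to zero.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_warmup_decay)
```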
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0412 | 1.0 | 46 | 1.0084 | 0.6829 |
| 0.8547 | 2.0 | 92 | 0.8433 | 0.6585 |
| 0.7936 | 3.0 | 138 | 0.7128 | 0.7073 |
| 0.5984 | 4.0 | 184 | 0.7778 | 0.7317 |
| 0.3888 | 5.0 | 230 | 0.6361 | 0.7317 |
| 0.4947 | 6.0 | 276 | 0.7471 | 0.7805 |
| 0.1663 | 7.0 | 322 | 0.8244 | 0.7561 |
| 0.1379 | 8.0 | 368 | 0.7986 | 0.8049 |
| 0.0405 | 9.0 | 414 | 0.8892 | 0.8049 |
| 0.0229 | 10.0 | 460 | 0.9447 | 0.8049 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AnveshakR/Reddit-NFL-FineTuned-Model | AnveshakR | "2023-12-13T16:55:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:44:03Z" | Entry not found |
MidPrepAdobe/test_1 | MidPrepAdobe | "2023-12-13T16:56:38Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T16:44:47Z" | ---
license: apache-2.0
---
|
vishwa27/flan-t5-large-mawpnli-calcx-nli-pt | vishwa27 | "2023-12-13T18:07:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T16:45:35Z" | ---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-large-mawpnli-calcx-nli-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-mawpnli-calcx-nli-pt
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1217
- Rouge1: 95.7098
- Rouge2: 89.9271
- Rougel: 95.5836
- Rougelsum: 95.5842
- Gen Len: 10.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2279 | 1.0 | 819 | 0.1290 | 95.075 | 87.8764 | 94.7902 | 94.8057 | 10.7978 |
| 0.0612 | 2.0 | 1638 | 0.1012 | 95.6219 | 89.6809 | 95.4399 | 95.4521 | 10.9029 |
| 0.0418 | 3.0 | 2457 | 0.0972 | 95.7709 | 90.1703 | 95.613 | 95.637 | 10.9328 |
| 0.0272 | 4.0 | 3276 | 0.1174 | 95.7478 | 90.1332 | 95.5931 | 95.6069 | 10.9395 |
| 0.0215 | 5.0 | 4095 | 0.1217 | 95.7098 | 89.9271 | 95.5836 | 95.5842 | 10.9151 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.12.1+cu113
- Datasets 2.15.0
- Tokenizers 0.15.0
|
aaalby/asean | aaalby | "2023-12-13T16:46:58Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T16:45:42Z" | ---
license: openrail
---
|
dvshah13/q-FrozenLake-v1-4x4-noSlippery | dvshah13 | "2023-12-13T16:47:24Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-13T16:47:22Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # provides gym.make for the saved env id below

# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities,
# which downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="dvshah13/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bodam/sd-model-finetuned-dreambooth-lora | bodam | "2023-12-13T16:47:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:47:36Z" | Entry not found |
DJ7/DJ14 | DJ7 | "2023-12-13T16:49:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:49:27Z" | Entry not found |
hkivancoral/smids_5x_deit_tiny_adamax_001_fold1 | hkivancoral | "2023-12-17T02:43:13Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T16:51:13Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_tiny_adamax_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8898163606010017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_adamax_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9489
- Accuracy: 0.8898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3549 | 1.0 | 376 | 0.4844 | 0.8264 |
| 0.2678 | 2.0 | 752 | 0.3259 | 0.8798 |
| 0.3098 | 3.0 | 1128 | 0.3469 | 0.8548 |
| 0.2057 | 4.0 | 1504 | 0.3089 | 0.8831 |
| 0.15 | 5.0 | 1880 | 0.4280 | 0.8748 |
| 0.0947 | 6.0 | 2256 | 0.5773 | 0.8581 |
| 0.1544 | 7.0 | 2632 | 0.3805 | 0.8881 |
| 0.1085 | 8.0 | 3008 | 0.4878 | 0.8731 |
| 0.0399 | 9.0 | 3384 | 0.4495 | 0.8965 |
| 0.0251 | 10.0 | 3760 | 0.5573 | 0.8681 |
| 0.0684 | 11.0 | 4136 | 0.4467 | 0.8648 |
| 0.0506 | 12.0 | 4512 | 0.5126 | 0.8982 |
| 0.0075 | 13.0 | 4888 | 0.8575 | 0.8715 |
| 0.0481 | 14.0 | 5264 | 0.7463 | 0.8664 |
| 0.0077 | 15.0 | 5640 | 0.6816 | 0.8865 |
| 0.0098 | 16.0 | 6016 | 0.6312 | 0.8831 |
| 0.0003 | 17.0 | 6392 | 0.7022 | 0.8965 |
| 0.0075 | 18.0 | 6768 | 0.6976 | 0.8731 |
| 0.0042 | 19.0 | 7144 | 0.6012 | 0.8881 |
| 0.0311 | 20.0 | 7520 | 0.7693 | 0.8932 |
| 0.003 | 21.0 | 7896 | 0.6254 | 0.8915 |
| 0.0101 | 22.0 | 8272 | 0.6004 | 0.8998 |
| 0.0209 | 23.0 | 8648 | 0.7643 | 0.8815 |
| 0.0001 | 24.0 | 9024 | 0.8262 | 0.8848 |
| 0.0007 | 25.0 | 9400 | 0.6944 | 0.8898 |
| 0.0034 | 26.0 | 9776 | 0.7140 | 0.8915 |
| 0.0071 | 27.0 | 10152 | 0.8088 | 0.8798 |
| 0.0001 | 28.0 | 10528 | 0.7766 | 0.9032 |
| 0.0039 | 29.0 | 10904 | 0.8084 | 0.8948 |
| 0.0045 | 30.0 | 11280 | 0.7741 | 0.8831 |
| 0.0006 | 31.0 | 11656 | 0.8264 | 0.8932 |
| 0.0 | 32.0 | 12032 | 0.8432 | 0.8865 |
| 0.0 | 33.0 | 12408 | 0.8641 | 0.8848 |
| 0.0 | 34.0 | 12784 | 0.8447 | 0.8865 |
| 0.0 | 35.0 | 13160 | 0.8402 | 0.8848 |
| 0.0 | 36.0 | 13536 | 0.8232 | 0.8948 |
| 0.0 | 37.0 | 13912 | 0.8382 | 0.8915 |
| 0.0 | 38.0 | 14288 | 0.8652 | 0.8898 |
| 0.0 | 39.0 | 14664 | 0.8733 | 0.8848 |
| 0.0 | 40.0 | 15040 | 0.8254 | 0.8881 |
| 0.0 | 41.0 | 15416 | 0.8627 | 0.8848 |
| 0.0 | 42.0 | 15792 | 0.8799 | 0.8881 |
| 0.0 | 43.0 | 16168 | 0.8887 | 0.8915 |
| 0.0 | 44.0 | 16544 | 0.9046 | 0.8932 |
| 0.0 | 45.0 | 16920 | 0.9092 | 0.8932 |
| 0.0031 | 46.0 | 17296 | 0.9143 | 0.8881 |
| 0.0 | 47.0 | 17672 | 0.9293 | 0.8915 |
| 0.0 | 48.0 | 18048 | 0.9378 | 0.8898 |
| 0.0 | 49.0 | 18424 | 0.9447 | 0.8898 |
| 0.0023 | 50.0 | 18800 | 0.9489 | 0.8898 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
idontgoddamn/AsagiMutsuki | idontgoddamn | "2023-12-13T16:51:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:51:28Z" | Entry not found |
Santiclibrain/mixtral_no_robots | Santiclibrain | "2023-12-13T17:08:46Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2023-12-13T16:54:32Z" | Entry not found |
Arlech/GameTL | Arlech | "2023-12-13T16:56:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:54:56Z" | Entry not found |
nhihlle/whisper-small-vietnamese | nhihlle | "2023-12-13T21:16:00Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"vi",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T16:58:39Z" | ---
language:
- vi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Vietnamese - Nhi Le
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Vietnamese - Nhi Le
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Vietnamese ASR Custom Corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3541
- Wer: 56.0841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.6449 | 0.02 | 2 | 4.3170 | 30.7461 |
| 3.4276 | 0.04 | 4 | 2.9799 | 32.8493 |
| 2.7302 | 0.07 | 6 | 2.6128 | 30.0451 |
| 2.0397 | 0.09 | 8 | 2.4305 | 33.7506 |
| 2.0823 | 0.11 | 10 | 2.3541 | 56.0841 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.1+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0
|
showrounak/moviesong | showrounak | "2023-12-13T16:59:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:58:54Z" | Entry not found |
aaptknews/BHARAT-AI | aaptknews | "2023-12-13T17:11:08Z" | 0 | 0 | transformers | [
"transformers",
"code",
"text-generation",
"en",
"hi",
"bh",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:wikimedia/wikipedia",
"dataset:Lin-Chen/ShareGPT4V",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T16:59:33Z" | ---
license: gpl-3.0
library_name: transformers
pipeline_tag: text-generation
datasets:
- fka/awesome-chatgpt-prompts
- wikimedia/wikipedia
- Lin-Chen/ShareGPT4V
language:
- en
- hi
- bh
metrics:
- accuracy
tags:
- code
---
pip install transformers |
dvshah13/taxi-v3 | dvshah13 | "2023-12-13T16:59:51Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-13T16:59:49Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # provides gym.make for the saved env id below

# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities,
# which downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="dvshah13/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes
env = gym.make(model["env_id"])
```
|
LoneStriker/bagel-dpo-7b-v0.1-3.0bpw-h6-exl2-2 | LoneStriker | "2023-12-13T17:05:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T17:03:24Z" | ---
license: apache-2.0
---
# A bagel, with everything (including DPO)
![bagel](bagel.png)
## Overview
This is the DPO'd version of https://huggingface.co/jondurbin/bagel-7b-v0.1
If you are getting too many "As an AI language model..." (AALLM) or other refusals, even with explicitly human system prompts, you may want to try the non-DPO version.
## Benchmarks
I ran these against the latest main branch of lm-evaluation-harness (and opencompass/FastChat for agieval and mt-bench), since batch size etc. affects scores for some benchmarks.
| model | arc_challenge | boolq | gsm8k | hellaswag | mmlu | openbookqa | piqa | truthful_qa | winogrande |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bagel | __0.6715__ | 0.8813 | __0.5618__ | 0.8397 | __0.6408__ | __0.51__ | __0.8406__ | __0.6275__ | __0.7561__ |
| openhermes-2.5 | 0.6476 | __0.8835__ | 0.4852 | __0.8414__ | 0.6347 | 0.498 | 0.8400 | 0.5295 | 0.7443 |
MT-Bench:
```
########## First turn ##########
score
model turn
bagel-7b-v0.1 1 7.60625
########## Second turn ##########
score
model turn
bagel-7b-v0.1 2 7.00625
########## Average ##########
score
model
bagel-7b-v0.1 7.30625
```
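As a quick sanity check, the reported average is just the mean of the two per-turn scores:

```python
# Per-turn MT-Bench scores for bagel-7b-v0.1, taken from the block above.
turn_scores = [7.60625, 7.00625]
average = sum(turn_scores) / len(turn_scores)
print(average)  # → 7.30625
```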
## Data selection.
The first step in the process is creating a dataset.
In this case, we're actually creating a composite dataset, consisting of both supervised fine-tuning data (SFT) and direct preference optimization (DPO) data.
All instruction data, that is, data that is not plain text (like project Gutenberg and items from Cinematika) or DPO, is converted into ShareGPT format so it's easier to work with.
See the corresponding code in `bagel/data_sources/*.py` for full implementation for each data source.
Deduplication is done by creating a uuid v5 of the instruction/text, then only adding items not previously seen (where datasets are loaded in order of the confidence score I assign them).
This means that if an instruction is in data source "Foo" with confidence 4 as well as in data source "Bar" with confidence score 2, only the entry from "Foo" will be taken.
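A minimal sketch of that selection logic (hypothetical field and function names; the real implementation lives in `bagel/data_sources/*.py`):

```python
import uuid

def dedupe(sources):
    """sources: list of (confidence, items) pairs, where each item is a dict
    with a "text" field. Sources are loaded from highest to lowest confidence,
    so the first occurrence of any instruction wins."""
    seen, kept = set(), []
    for confidence, items in sorted(sources, key=lambda s: -s[0]):
        for item in items:
            key = uuid.uuid5(uuid.NAMESPACE_DNS, item["text"])
            if key not in seen:
                seen.add(key)
                kept.append(item)
    return kept
```

With the "Foo"/"Bar" example from the text, the shared instruction survives only in its confidence-4 form.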
### SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [piqa](https://huggingface.co/datasets/piqa)
- Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
### DPO data sources
- [airoboros 3.1](https://huggingface.co/datasets/unalignment/spicy-3.1) vs [airoboros 2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
- The creative/writing tasks from airoboros-2.2.1 were re-generated using gpt4-0314 and a custom prompt to get longer, more creative, less cliché responses for airoboros 3.1, so we can use the shorter/boring version as the "rejected" value and the rerolled response as "chosen"
- [helpsteer](https://huggingface.co/datasets/nvidia/HelpSteer)
- Really neat dataset provided by the folks at NVIDIA with human annotation across a variety of metrics. Only items with the highest "correctness" value were used for DPO here, with the highest-scoring output as "chosen" and a random lower-scoring output as "rejected"
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- Another interesting dataset by Intel, which provides various DPO pairs generated from prompts included in the SlimOrca dataset.
- [toxic-dpo](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
- __*highly toxic and potentially illegal content!*__ De-censorship, for academic and lawful purposes only, of course. Generated by llama-2-70b via prompt engineering.
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- DPO pairs meant to increase truthfulness of the model, e.g. common misconceptions, differentiate between AI assistants and roleplayed human in terms of corporeal awareness/locality/etc.
- [ultrafeedback](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned)
- One of the bits of magic behind the Zephyr model. Only the items with a chosen score of 8 or higher were included.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
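Conceptually, that extra decontamination pass looks something like the toy sketch below, which uses exact cosine similarity over small NumPy arrays in place of faiss's approximate nearest-neighbor search over real sentence embeddings (the function name and threshold are illustrative only):

```python
import numpy as np

def decontaminate(train_emb, test_emb, threshold=0.95):
    """Return indices of training rows whose nearest test row stays below the
    similarity threshold; everything above it is treated as contamination."""
    train = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    test = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    sims = train @ test.T                 # pairwise cosine similarities
    keep = sims.max(axis=1) < threshold   # nearest-neighbor check per train row
    return np.where(keep)[0]
```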
### Total dataset size
The deduplicated and decontaminated list of instructions contains 1,671,822 items:
- 1,602,217 SFT/instructions
- 59,247 DPO pairs
- 1606 with both SFT and DPO data
Keep in mind, this number becomes 4x larger when applying the various prompt formats.
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
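A rough sketch of that expansion step (template strings abbreviated here; the exact layouts are given in the sections that follow):

```python
def expand(instruction, system="A chat."):
    # Each instruction is emitted once per prompt format, so one pass over
    # the expanded dataset covers every item four times.
    return {
        "vicuna": f"{system}\nUSER: {instruction}\nASSISTANT:",
        "llama-2": f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]",
        "alpaca": f"### Instruction:\n{system}\n{instruction}\n\n### Response:\n",
        "chat-ml": f"system\n{system}\nuser\n{instruction}\nassistant\n",
    }

prompts = expand("Tell me how to fry an egg.")
```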
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
In practice, this would mean tokenization code like such:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/mistral-7b-v0.1")
input_str = f"""system
You are a goat.
{tokenizer.eos_token}
{tokenizer.bos_token}user
Tell me how to fry an egg.
{tokenizer.eos_token}
{tokenizer.bos_token}assistant
"""
inputs = tokenizer(input_str, return_tensors="pt")
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
## Fine tuning
### SFT phase
An example for mistral-7b:
*Note: I actually used my fork of [qlora](https://github.com/jondurbin/qlora)'s `train.py` for this, but I'm porting it to a minified version here, not tested yet!*
*More notes: I stopped the SFT phase around 50% because of budget constraints.*
```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.json
```
Deepspeed configuration:
```json
{
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8
}
}
```
### DPO phase
An example of the DPO phase for mistral-7b (requires first running the SFT):
```bash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5
``` |
JAILHJH/TRTRT | JAILHJH | "2023-12-13T17:11:57Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T17:11:57Z" | ---
license: openrail
---
|
Tsuinzues/tori | Tsuinzues | "2023-12-13T17:15:26Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T17:15:05Z" | ---
license: openrail
---
|
brendenbogi/idk | brendenbogi | "2023-12-16T06:09:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T17:15:30Z" | Entry not found |
JeskoR/mistral_b_finance_finetuned_test | JeskoR | "2023-12-14T08:58:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2023-12-13T17:19:20Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
erbacher/vae-burgers-norevin | erbacher | "2023-12-14T08:40:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pdetokenizer",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T17:19:30Z" | Entry not found |
Santiclibrain/mixtral_orca_spanish_adapter | Santiclibrain | "2023-12-16T07:57:07Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T17:21:48Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: mixtral_no_robots_secondtry
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mixtral_no_robots_secondtry
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
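The total batch sizes listed above follow from the per-device batch sizes and the device count (gradient accumulation, not listed, defaults to 1); a quick sanity check:

```python
# Sanity check of the effective batch sizes listed above.
# total = per-device batch size x number of devices (x gradient
# accumulation steps, assumed 1 since it is not listed).
train_batch_size = 1
eval_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 1  # assumed default

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 8, matching the card
print(total_eval_batch_size)   # 64, matching the card
```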
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0635 | 0.02 | 1000 | 1.1332 |
| 0.9311 | 0.03 | 2000 | 1.1109 |
| 0.9417 | 0.05 | 3000 | 1.0926 |
| 1.0411 | 0.06 | 4000 | 1.0809 |
| 0.9516 | 0.08 | 5000 | 1.0786 |
| 1.0107 | 0.09 | 6000 | 1.0726 |
| 1.0698 | 0.11 | 7000 | 1.0666 |
| 1.1083 | 0.13 | 8000 | 1.0638 |
| 0.9148 | 0.14 | 9000 | 1.0589 |
| 0.957 | 0.16 | 10000 | 1.0565 |
| 1.0063 | 0.17 | 11000 | 1.0531 |
| 0.9831 | 0.19 | 12000 | 1.0509 |
| 1.0826 | 0.2 | 13000 | 1.0490 |
| 0.9598 | 0.22 | 14000 | 1.0518 |
| 0.8066 | 0.23 | 15000 | 1.0453 |
| 0.8795 | 0.25 | 16000 | 1.0431 |
| 1.1402 | 0.27 | 17000 | 1.0442 |
| 1.0652 | 0.28 | 18000 | 1.0428 |
| 0.93 | 0.3 | 19000 | 1.0371 |
| 0.9727 | 0.31 | 20000 | 1.0344 |
| 1.0753 | 0.33 | 21000 | 1.0339 |
| 0.9498 | 0.34 | 22000 | 1.0303 |
| 0.6971 | 0.36 | 23000 | 1.0316 |
| 0.9259 | 0.38 | 24000 | 1.0298 |
| 1.0359 | 0.39 | 25000 | 1.0284 |
| 1.1883 | 0.41 | 26000 | 1.0273 |
| 0.8642 | 0.42 | 27000 | 1.0250 |
| 0.9147 | 0.44 | 28000 | 1.0226 |
| 0.7824 | 0.45 | 29000 | 1.0237 |
| 0.8319 | 0.47 | 30000 | 1.0219 |
| 0.9443 | 0.49 | 31000 | 1.0190 |
| 0.9103 | 0.5 | 32000 | 1.0166 |
| 0.8903 | 0.52 | 33000 | 1.0149 |
| 1.0509 | 0.53 | 34000 | 1.0148 |
| 1.0008 | 0.55 | 35000 | 1.0151 |
| 0.778 | 0.56 | 36000 | 1.0106 |
| 0.7957 | 0.58 | 37000 | 1.0090 |
| 0.8679 | 0.6 | 38000 | 1.0085 |
| 1.064 | 0.61 | 39000 | 1.0064 |
| 0.823 | 0.63 | 40000 | 1.0061 |
| 0.9117 | 0.64 | 41000 | 1.0047 |
| 0.8284 | 0.66 | 42000 | 1.0019 |
| 0.9345 | 0.67 | 43000 | 1.0012 |
| 0.9854 | 0.69 | 44000 | 1.0004 |
| 0.7631 | 0.7 | 45000 | 0.9989 |
| 0.7189 | 0.72 | 46000 | 0.9979 |
| 0.9386 | 0.74 | 47000 | 0.9952 |
| 1.011 | 0.75 | 48000 | 0.9943 |
| 0.9627 | 0.77 | 49000 | 0.9941 |
| 1.1317 | 0.78 | 50000 | 0.9923 |
| 1.0506 | 0.8 | 51000 | 0.9912 |
| 0.8596 | 0.81 | 52000 | 0.9894 |
| 0.9702 | 0.83 | 53000 | 0.9889 |
| 1.0198 | 0.85 | 54000 | 0.9875 |
| 1.1125 | 0.86 | 55000 | 0.9862 |
| 0.9356 | 0.88 | 56000 | 0.9862 |
| 0.7212 | 0.89 | 57000 | 0.9852 |
| 0.974 | 0.91 | 58000 | 0.9843 |
| 0.9369 | 0.92 | 59000 | 0.9829 |
| 0.938 | 0.94 | 60000 | 0.9826 |
| 0.8011 | 0.96 | 61000 | 0.9818 |
| 0.7937 | 0.97 | 62000 | 0.9811 |
| 0.9679 | 0.99 | 63000 | 0.9807 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
danielssj88/platzi-vit-model-omar-espejel | danielssj88 | "2023-12-13T17:27:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T17:27:21Z" | Entry not found |
chaosmonk/ag2 | chaosmonk | "2023-12-13T17:27:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T17:27:31Z" | Entry not found |
hkivancoral/smids_3x_beit_base_sgd_0001_fold2 | hkivancoral | "2023-12-13T18:15:58Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T17:28:13Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7886855241264559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_0001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5470
- Accuracy: 0.7887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
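A linear scheduler with a 0.1 warmup ratio ramps the learning rate from 0 to the peak over the first 10% of steps, then decays it linearly back to 0. A minimal sketch of that schedule (an illustration, not the Trainer's internal implementation):

```python
def linear_schedule_with_warmup(step, total_steps, peak_lr, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    # Linear decay over the remaining steps.
    remaining = total_steps - step
    return peak_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

total_steps = 11250  # 225 steps/epoch x 50 epochs, as in the results table
peak_lr = 1e-4

print(linear_schedule_with_warmup(0, total_steps, peak_lr))      # start of warmup
print(linear_schedule_with_warmup(1125, total_steps, peak_lr))   # peak, end of warmup
print(linear_schedule_with_warmup(11250, total_steps, peak_lr))  # decayed to 0
```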
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1643 | 1.0 | 225 | 1.2557 | 0.3494 |
| 1.1336 | 2.0 | 450 | 1.1964 | 0.3727 |
| 1.0702 | 3.0 | 675 | 1.1415 | 0.3960 |
| 1.0744 | 4.0 | 900 | 1.0897 | 0.4226 |
| 0.9272 | 5.0 | 1125 | 1.0392 | 0.4526 |
| 0.9348 | 6.0 | 1350 | 0.9924 | 0.4908 |
| 0.9221 | 7.0 | 1575 | 0.9474 | 0.5374 |
| 0.8806 | 8.0 | 1800 | 0.9069 | 0.5890 |
| 0.8541 | 9.0 | 2025 | 0.8693 | 0.6206 |
| 0.8102 | 10.0 | 2250 | 0.8367 | 0.6439 |
| 0.7893 | 11.0 | 2475 | 0.8072 | 0.6672 |
| 0.7786 | 12.0 | 2700 | 0.7812 | 0.6872 |
| 0.7601 | 13.0 | 2925 | 0.7581 | 0.7038 |
| 0.7654 | 14.0 | 3150 | 0.7376 | 0.7105 |
| 0.7556 | 15.0 | 3375 | 0.7195 | 0.7171 |
| 0.7319 | 16.0 | 3600 | 0.7031 | 0.7321 |
| 0.6868 | 17.0 | 3825 | 0.6881 | 0.7354 |
| 0.7278 | 18.0 | 4050 | 0.6745 | 0.7421 |
| 0.6222 | 19.0 | 4275 | 0.6623 | 0.7454 |
| 0.6905 | 20.0 | 4500 | 0.6515 | 0.7471 |
| 0.6715 | 21.0 | 4725 | 0.6419 | 0.7554 |
| 0.7342 | 22.0 | 4950 | 0.6326 | 0.7554 |
| 0.6844 | 23.0 | 5175 | 0.6245 | 0.7621 |
| 0.6577 | 24.0 | 5400 | 0.6173 | 0.7654 |
| 0.6177 | 25.0 | 5625 | 0.6101 | 0.7687 |
| 0.647 | 26.0 | 5850 | 0.6037 | 0.7671 |
| 0.6355 | 27.0 | 6075 | 0.5976 | 0.7704 |
| 0.6059 | 28.0 | 6300 | 0.5926 | 0.7704 |
| 0.5954 | 29.0 | 6525 | 0.5873 | 0.7770 |
| 0.6256 | 30.0 | 6750 | 0.5829 | 0.7787 |
| 0.6261 | 31.0 | 6975 | 0.5789 | 0.7820 |
| 0.5804 | 32.0 | 7200 | 0.5748 | 0.7820 |
| 0.5936 | 33.0 | 7425 | 0.5711 | 0.7854 |
| 0.5647 | 34.0 | 7650 | 0.5682 | 0.7854 |
| 0.6238 | 35.0 | 7875 | 0.5657 | 0.7854 |
| 0.5976 | 36.0 | 8100 | 0.5630 | 0.7854 |
| 0.5852 | 37.0 | 8325 | 0.5605 | 0.7870 |
| 0.5826 | 38.0 | 8550 | 0.5584 | 0.7854 |
| 0.5619 | 39.0 | 8775 | 0.5564 | 0.7854 |
| 0.5946 | 40.0 | 9000 | 0.5547 | 0.7870 |
| 0.5381 | 41.0 | 9225 | 0.5529 | 0.7870 |
| 0.5966 | 42.0 | 9450 | 0.5514 | 0.7870 |
| 0.588 | 43.0 | 9675 | 0.5504 | 0.7870 |
| 0.5705 | 44.0 | 9900 | 0.5494 | 0.7854 |
| 0.6073 | 45.0 | 10125 | 0.5486 | 0.7870 |
| 0.5915 | 46.0 | 10350 | 0.5480 | 0.7887 |
| 0.5988 | 47.0 | 10575 | 0.5476 | 0.7887 |
| 0.542 | 48.0 | 10800 | 0.5472 | 0.7887 |
| 0.5885 | 49.0 | 11025 | 0.5471 | 0.7887 |
| 0.5585 | 50.0 | 11250 | 0.5470 | 0.7887 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
shirzady1934/bert-riddle-finetuned_2choice | shirzady1934 | "2023-12-13T17:29:23Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"mhs",
"generated_from_trainer",
"en",
"base_model:bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2023-12-13T17:29:06Z" | ---
language:
- en
license: apache-2.0
base_model: bert-base-uncased
tags:
- mhs
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the WP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5591
- Accuracy: 0.8500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 23 | 0.5552 | 0.8250 |
| No log | 2.0 | 46 | 0.4623 | 0.8250 |
| No log | 3.0 | 69 | 0.5304 | 0.8250 |
| No log | 4.0 | 92 | 0.5741 | 0.8500 |
| No log | 5.0 | 115 | 0.5591 | 0.8500 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
EmmaGthn/results_lora_40_5000_bias | EmmaGthn | "2023-12-13T20:25:33Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2023-12-13T17:30:28Z" | Entry not found |
hkivancoral/smids_3x_beit_base_rms_001_fold2 | hkivancoral | "2023-12-13T18:18:58Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T17:30:32Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7737104825291181
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8871
- Accuracy: 0.7737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1063 | 1.0 | 225 | 1.1925 | 0.3627 |
| 0.8848 | 2.0 | 450 | 0.8623 | 0.5557 |
| 0.9929 | 3.0 | 675 | 0.7924 | 0.5774 |
| 0.7922 | 4.0 | 900 | 0.7743 | 0.5973 |
| 0.7804 | 5.0 | 1125 | 0.7554 | 0.5940 |
| 0.7536 | 6.0 | 1350 | 0.7911 | 0.5740 |
| 0.7389 | 7.0 | 1575 | 0.8973 | 0.5524 |
| 0.8004 | 8.0 | 1800 | 0.7349 | 0.6140 |
| 0.7283 | 9.0 | 2025 | 0.7228 | 0.6356 |
| 0.7381 | 10.0 | 2250 | 0.7154 | 0.6389 |
| 0.8566 | 11.0 | 2475 | 0.7154 | 0.6373 |
| 0.725 | 12.0 | 2700 | 0.6853 | 0.6539 |
| 0.7139 | 13.0 | 2925 | 0.6833 | 0.6722 |
| 0.708 | 14.0 | 3150 | 0.7156 | 0.6489 |
| 0.6892 | 15.0 | 3375 | 0.6841 | 0.6955 |
| 0.7392 | 16.0 | 3600 | 0.6648 | 0.6905 |
| 0.7123 | 17.0 | 3825 | 0.6864 | 0.6689 |
| 0.6752 | 18.0 | 4050 | 0.6534 | 0.7088 |
| 0.7193 | 19.0 | 4275 | 0.7054 | 0.6755 |
| 0.6734 | 20.0 | 4500 | 0.6500 | 0.6855 |
| 0.649 | 21.0 | 4725 | 0.6222 | 0.6872 |
| 0.7173 | 22.0 | 4950 | 0.6280 | 0.7321 |
| 0.6723 | 23.0 | 5175 | 0.6016 | 0.7587 |
| 0.6406 | 24.0 | 5400 | 0.6206 | 0.7221 |
| 0.6216 | 25.0 | 5625 | 0.6173 | 0.7338 |
| 0.6154 | 26.0 | 5850 | 0.5917 | 0.7488 |
| 0.6137 | 27.0 | 6075 | 0.6327 | 0.7304 |
| 0.597 | 28.0 | 6300 | 0.6319 | 0.7155 |
| 0.6292 | 29.0 | 6525 | 0.6003 | 0.7321 |
| 0.615 | 30.0 | 6750 | 0.5967 | 0.7554 |
| 0.5842 | 31.0 | 6975 | 0.5866 | 0.7587 |
| 0.5976 | 32.0 | 7200 | 0.5968 | 0.7388 |
| 0.5096 | 33.0 | 7425 | 0.5717 | 0.7671 |
| 0.4883 | 34.0 | 7650 | 0.5888 | 0.7804 |
| 0.5258 | 35.0 | 7875 | 0.6027 | 0.7820 |
| 0.49 | 36.0 | 8100 | 0.6052 | 0.7820 |
| 0.5271 | 37.0 | 8325 | 0.5944 | 0.7654 |
| 0.4464 | 38.0 | 8550 | 0.6867 | 0.7504 |
| 0.3796 | 39.0 | 8775 | 0.6032 | 0.7820 |
| 0.4175 | 40.0 | 9000 | 0.6446 | 0.7704 |
| 0.3633 | 41.0 | 9225 | 0.6564 | 0.7804 |
| 0.4496 | 42.0 | 9450 | 0.6467 | 0.7770 |
| 0.2811 | 43.0 | 9675 | 0.6703 | 0.7754 |
| 0.3066 | 44.0 | 9900 | 0.7311 | 0.7754 |
| 0.3558 | 45.0 | 10125 | 0.7685 | 0.7787 |
| 0.2645 | 46.0 | 10350 | 0.7874 | 0.7754 |
| 0.2214 | 47.0 | 10575 | 0.8226 | 0.7737 |
| 0.2321 | 48.0 | 10800 | 0.8600 | 0.7704 |
| 0.314 | 49.0 | 11025 | 0.8728 | 0.7770 |
| 0.1915 | 50.0 | 11250 | 0.8871 | 0.7737 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Trelis/SUS-Chat-34B-function-calling-v3 | Trelis | "2024-01-05T15:14:06Z" | 0 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"yi",
"long context",
"commercial use",
"gptq",
"function-calling",
"function calling",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T17:32:17Z" | ---
license: other
widget:
- example_title: SUS-Chat
text: hi
output:
text: ' Hello! How can I assist you today?'
pipeline_tag: text-generation
tags:
- yi
- long context
- commercial use
- gptq
- function-calling
- function calling
extra_gated_prompt: "Purchase access to this repo [HERE](https://buy.stripe.com/6oE9Bmg8t1Dt1ck9BL)!"
---
# Function Calling Fine-tuned Yi Chat 200k Context
Purchase access to this model [here](https://buy.stripe.com/6oE9Bmg8t1Dt1ck9BL).
This model is fine-tuned for function calling.
- The function metadata format is the same as used for OpenAI.
- The model is suitable for commercial use.
- See the 'gptq' branch for the GPTQ model.
- AWQ and GGUF are available on request after purchase.
Check out other fine-tuned function calling models [here](https://trelis.com/function-calling/).
## Quick Server Setup
Runpod one click template, TGI API with EETQ (8bit) [here](https://runpod.io/gsc?template=p5zxy64o61&ref=jmfkcdio). You must add a HuggingFace Hub access token (HUGGING_FACE_HUB_TOKEN) to the environment variables as this is a gated model.
Runpod one click template, vLLM API with AWQ (4bit) [here](https://runpod.io/gsc?template=no46bznoof&ref=jmfkcdio). You must add a HuggingFace Hub access token (HUGGING_FACE_HUB_TOKEN) to the environment variables as this is a gated model.
Runpod Affiliate [Link](https://runpod.io?ref=jmfkcdio) (helps support the Trelis channel).
## Inference Scripts
See below for sample prompt format.
Complete inference scripts are available for purchase [here](https://trelis.com/enterprise-server-api-and-inference-guide/):
- Easily format prompts using tokenizer.apply_chat_template (starting from openai formatted functions and a list of messages)
- Automate catching, handling and chaining of function calls.
## Prompt Format
```
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "### Human: ", "\n\n### Assistant: " #SUSChat
prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```
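For instance, with a single function and a short user question (illustrative stand-in values for `functionList` and `user_prompt`), the template assembles as follows:

```python
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "### Human: ", "\n\n### Assistant: "  # SUSChat

# Illustrative stand-ins; in practice functionList holds the JSON function
# metadata and user_prompt holds the user's message.
functionList = '[{"name": "get_current_weather"}]'
user_prompt = "What is the weather in London?"

prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
print(prompt)
```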
### Using tokenizer.apply_chat_template
For an easier application of the prompt, you can set up as follows:
Set up `messages`:
```
[
{
"role": "function_metadata",
"content": "FUNCTION_METADATA"
},
{
"role": "user",
"content": "What is the current weather in London?"
},
{
"role": "function_call",
"content": "{\n \"name\": \"get_current_weather\",\n \"arguments\": {\n \"city\": \"London\"\n }\n}"
},
{
"role": "function_response",
"content": "{\n \"temperature\": \"15 C\",\n \"condition\": \"Cloudy\"\n}"
},
{
"role": "assistant",
"content": "The current weather in London is Cloudy with a temperature of 15 Celsius"
}
]
```
with `FUNCTION_METADATA` as:
```
[
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "This function gets the current weather in a given city",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city, e.g., San Francisco"
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use."
}
},
"required": ["city"]
}
}
},
{
"type": "function",
"function": {
"name": "get_clothes",
"description": "This function provides a suggestion of clothes to wear based on the current weather",
"parameters": {
"type": "object",
"properties": {
"temperature": {
"type": "string",
"description": "The temperature, e.g., 15 C or 59 F"
},
"condition": {
"type": "string",
"description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'"
}
},
"required": ["temperature", "condition"]
}
}
}
]
```
and then apply the chat template to get a formatted prompt:
```
tokenizer = AutoTokenizer.from_pretrained('Trelis/SUS-Chat-34B-function-calling-v3', trust_remote_code=True)
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
```
If you are using a gated model, you need to first run:
```
pip install huggingface_hub
huggingface-cli login
```
### Manual Prompt:
```
### Human: You have access to the following functions. Use them if required:
[
{
"type": "function",
"function": {
"name": "get_stock_price",
"description": "Get the stock price of an array of stocks",
"parameters": {
"type": "object",
"properties": {
"names": {
"type": "array",
"items": {
"type": "string"
},
"description": "An array of stocks"
}
},
"required": [
"names"
]
}
}
},
{
"type": "function",
"function": {
"name": "get_big_stocks",
"description": "Get the names of the largest N stocks by market cap",
"parameters": {
"type": "object",
"properties": {
"number": {
"type": "integer",
"description": "The number of largest stocks to get the names of, e.g. 25"
},
"region": {
"type": "string",
"description": "The region to consider, can be \"US\" or \"World\"."
}
},
"required": [
"number"
]
}
}
}
]
Get the names of the five largest stocks by market cap

### Assistant:
{
"name": "get_big_stocks",
"arguments": {
"number": 5
}
}<|endoftext|>
```
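When the model responds with a function call like the one above, the JSON can be extracted before dispatching to the actual function. A minimal sketch of such a parser (the end-of-text handling and the None-on-plain-text convention are assumptions for illustration, not part of the inference scripts sold with the model):

```python
import json

def parse_function_call(response: str):
    """Return (name, arguments) if the response is a JSON function call, else None."""
    text = response.split("<|endoftext|>")[0].strip()
    try:
        call = json.loads(text)
    except json.JSONDecodeError:
        return None  # plain-text answer, not a function call
    if isinstance(call, dict) and "name" in call and "arguments" in call:
        return call["name"], call["arguments"]
    return None

response = '{"name": "get_big_stocks", "arguments": {"number": 5}}<|endoftext|>'
print(parse_function_call(response))  # ('get_big_stocks', {'number': 5})
print(parse_function_call("The five largest stocks are..."))  # None
```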
# Dataset
See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function_calling_v3).
# License
This model may be used commercially for inference according to the terms of the Yi license, or for further fine-tuning and inference. Users may not re-publish or re-sell this model in the same or derivative form (including fine-tunes).
**The SFT chat fine-tuned model's repo card follows below.**
# 🐷SUS-Chat: Instruction tuning done right
<p align="left">
<a href="README_CN.md">中文</a>  |  English 
</p>
<br><br>
<div align="center">
<p align="center">
<img src="https://github.com/SUSTech-IDEA/SUS-Chat/raw/main/assets/sustech.svg?sanitize=true" width="200px">
<img src="https://github.com/SUSTech-IDEA/SUS-Chat/raw/main/assets/ccnl.png?sanitize=true" width="200px">
</p>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/SUSTech-IDEA/SUS-Chat/issues">
<img src="https://img.shields.io/github/issues/SUSTech-IDEA/SUS-Chat?logo=github" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a href="https://huggingface.co/SUSTech">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SUSTech-blue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://www.modelscope.cn/organization/sustc/">
<img src="https://img.shields.io/badge/🤖ModelScope-sustc-blue" style="margin: 0 0;">
</a>
</div>
<a href="https://wisemodel.cn/organization/SUSTech">
<img src="https://img.shields.io/badge/WiseModel-SUSTech-blue"> </a>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/SUSTech-IDEA/SUS-Chat/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Model_Agreement-lightblue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="mailto:oss@data.sustech.edu.cn">
<img src="https://img.shields.io/badge/✉️-data@sustech.edu.cn-FFE01B" style="margin: 0 0;">
</a>
</div>
</div>
# News
- 2023-12-09: 🔥 `Tigerbot` variant has been
[deleted](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/438),
`SUS-Chat-34B` is now the top-ranked LLaMA model and the
top-ranked chat model.
- 2023-12-07: SUS-Chat-34B is now available on
[WiseModel🧠](https://wisemodel.cn/model/SUSTech/SUS-Chat-34B).
- 2023-12-06: Try [SUS-Chat-34B
chat-ui](https://huggingface.co/spaces/SUSTech/SUS-Chat-34B).
- 2023-12-05: SUS-Chat-34B is now available on
[ModelScope🤖](https://www.modelscope.cn/models/SUSTC/SUS-Chat-34B/summary)
- 2023-12-05: SUS-Chat-34B is ranked 2nd in [Open LLM
leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
and surpassed all models under 70B.
- 2023-12-01: SUS-Chat-34B is now available on
[HuggingFace🤗](https://huggingface.co/SUSTech/SUS-Chat-34B).
# Introduction
<img src="https://hackmd.io/_uploads/HJlDtzhBa.png" id="fig-sus"
alt="Figure 1: DALL·E 2023-12-01 11.03.28 - An imposing, majestic wild boar combined with elements of a futuristic transformer robot. The boar itself should be intricately blended with these tra" />
**SUS-Chat-34B** is a 34B bilingual Chinese-English dialogue model,
jointly released by the **[Southern University of Science and
Technology](https://huggingface.co/SUSTech)** and
**[IDEA-CCNL](https://huggingface.co/IDEA-CCNL)**. This model is based
on [`01-ai/Yi-34B`](https://huggingface.co/01-ai/Yi-34B) and has been
fine-tuned on millions of high-quality, multilingual instruction data.
While maintaining the strong language capabilities of the base model,
the SUS-Chat-34B model has improved the model’s response to human
instructions through high-quality instruction fine-tuning and excels at
imitating human thought processes through chains of thought. It
introduces inter-instruction attention sharing in long texts, expanding
the window size from 4K to 8K, significantly enhancing the usability of
multi-turn dialogues.
It has surpassed all models of the same size in almost all benchmark
tests and is better suited to meet the practical needs of complex
multilingual tasks. Compared to larger models, SUS-Chat-34B remains
highly competitive and has achieved state-of-the-art performance in our
comprehensive evaluations.
SUS-Chat-34B model has the following highlights:
1. Large-scale complex instruction following data: Trained with 1.4
billion tokens of high-quality complex instruction data, covering
Chinese and English, multi-turn dialogues, mathematics, reasoning,
and various other types of instruction data;
2. Strong performance in general tasks: The SUS-Chat-34B model excels
in numerous mainstream Chinese and English tasks, surpassing other
open-source instruction fine-tuned models of the same parameter
scale. It also competes well against models with larger parameter
scales;
3. Longer context window and excellent multi-turn dialogue
capabilities: Currently, SUS-Chat-34B supports an 8K context window,
and is trained with a large amount of multi-turn instruction and
single-multi-turn mixed data, demonstrating remarkable capabilities
in long-text dialogue information focus and instruction follow-up.
SUS-Chat powerfully demonstrates that, with the right instruction
fine-tuning, academic institutions can achieve better performance
using open-source datasets and models, without increasing model
parameters. This narrows the gap between academia and industry in
large language models and opens new possibilities for collaboration
between the academic and industrial sectors.
# Performance
To better evaluate the performance of the SUS-Chat-34B model, we
conducted assessments across multiple benchmark tests and have
open-sourced the evaluation framework
[TLEM](https://huggingface.co/spaces/SUSTech/tlem) to facilitate
replication and comparison by other researchers.
In TLEM, we utilized various benchmark tests including MMLU, CMMLU,
C-Eval, BBH, GSM-8K, and MATH, to measure the model’s knowledge and
thinking capabilities. In these metrics, the SUS-Chat-34B model achieved
state-of-the-art performance. Additionally, we incorporated
[lm-eval](https://github.com/EleutherAI/lm-evaluation-harness) to test
SUS-Chat and similar models on winogrande, hellaswag, arc, and
truthful-qa, assessing the model’s common-sense reasoning ability and
susceptibility to illusions.
Overall, the SUS-Chat-34B model significantly outperformed models of
similar scale and achieved the most advanced comprehensive performance.
<img
src="https://github.com/SUSTech-IDEA/SUS-Chat/raw/main/assets/radar.png"
id="fig-bench" alt="Figure 2: Benchmark" />
<div>
<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<tbody>
<tr class="odd">
<td style="text-align: center;"><div width="50.0%"
data-layout-align="center">
<h2 id="english-understanding">English Understanding</h2>
<table>
<thead>
<tr class="header">
<th style="text-align: right;">Model</th>
<th style="text-align: center;">mmlu (0-shot)</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: right;">GPT-4</td>
<td style="text-align: center;">83</td>
</tr>
<tr class="even">
<td style="text-align: right;">SUS-Chat-34B</td>
<td style="text-align: center;"><u>74.35</u></td>
</tr>
<tr class="odd">
<td style="text-align: right;">Qwen-72b-Chat</td>
<td style="text-align: center;"><strong>74.52</strong></td>
</tr>
<tr class="even">
<td style="text-align: right;">Deepseek-68b-Chat</td>
<td style="text-align: center;">69.43</td>
</tr>
<tr class="odd">
<td style="text-align: right;">OrionStar-Yi-34B-Chat</td>
<td style="text-align: center;">68.51</td>
</tr>
<tr class="even">
<td style="text-align: right;">Yi-34B-Chat</td>
<td style="text-align: center;">66.96</td>
</tr>
</tbody>
</table>
</div></td>
<td style="text-align: center;"><div width="50.0%"
data-layout-align="center">
<h2 id="chinese-capabilities">Chinese Capabilities</h2>
<table>
<colgroup>
<col style="width: 34%" />
<col style="width: 32%" />
<col style="width: 32%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: right;">Model</th>
<th style="text-align: center;">cmmlu (0-shot)</th>
<th style="text-align: center;">C-Eval (0-shot)<a href="#fn1"
class="footnote-ref" id="fnref1"
role="doc-noteref"><sup>1</sup></a></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: right;">GPT-4</td>
<td style="text-align: center;">71</td>
<td style="text-align: center;">69.9</td>
</tr>
<tr class="even">
<td style="text-align: right;">SUS-Chat-34B</td>
<td style="text-align: center;"><strong>78.68</strong></td>
<td style="text-align: center;"><strong>82.42</strong></td>
</tr>
<tr class="odd">
<td style="text-align: right;">Qwen-72b-Chat</td>
<td style="text-align: center;"><u>77.02</u></td>
<td style="text-align: center;"><u>77.22</u></td>
</tr>
<tr class="even">
<td style="text-align: right;">Deepseek-68b-Chat</td>
<td style="text-align: center;">48.51</td>
<td style="text-align: center;">59.7</td>
</tr>
<tr class="odd">
<td style="text-align: right;">OrionStar-Yi-34B-Chat</td>
<td style="text-align: center;">66.88</td>
<td style="text-align: center;">65.13</td>
</tr>
<tr class="even">
<td style="text-align: right;">Yi-34B-Chat</td>
<td style="text-align: center;">55.16</td>
<td style="text-align: center;">77.16</td>
</tr>
</tbody>
</table>
</div></td>
</tr>
</tbody>
</table>
<section id="footnotes" class="footnotes footnotes-end-of-document"
role="doc-endnotes">
<hr />
<ol>
<li id="fn1"><p>C-Eval results are evaluated on the validation
datasets<a href="#fnref1" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
</ol>
</section>
</div>
## Math & Reasoning
| Model | gsm8k (0-shot) | MATH (0-shot) | BBH (0-shot) |
|----------------------:|:--------------:|:-------------:|:------------:|
| GPT-4 | 91.4 | 45.8 | 86.7 |
| SUS-Chat-34B | **80.06** | 28.7 | 67.62 |
| Qwen-72b-Chat | <u>76.57</u> | **35.9** | **72.63** |
| Deepseek-68b-Chat | 74.45 | <u>29.56</u> | <u>69.73</u> |
| OrionStar-Yi-34B-Chat | 54.36 | 12.8 | 62.88 |
| Yi-34B-Chat | 63.76 | 10.02 | 61.54 |
## More Tasks
| Model | winogrande (5-shot) | arc (25-shot) | hellaswag (10-shot) | TruthfulQA mc1 (0-shot) | TruthfulQA mc2 (0-shot) |
|----------------------:|:-------------------:|:-------------:|:-------------------:|:-----------------------:|:-----------------------:|
| GPT-4 | — | 94.5 | 91.4 | 59.00 | — |
| SUS-Chat-34B | **81.22** | <u>81.54</u> | 83.79 | **40.64** | **57.47** |
| Qwen-72b-Chat | 76.09 | **82.10** | <u>86.06</u> | 39.17 | <u>56.37</u> |
| Deepseek-68b-Chat | <u>80.58</u> | 81.29 | **87.02** | <u>40.02</u> | 50.64 |
| OrionStar-Yi-34B-Chat | 77.27 | 80.19 | 84.54 | 36.47 | 53.24 |
| Yi-34B-Chat | 76.64 | 70.66 | 82.29 | 38.19 | 54.57 |
## Overall
| Model | Average |
|----------------------:|:---------:|
| SUS-Chat-34B | **69.05** |
| Qwen-72b-Chat | 68.41 |
| Deepseek-68b-Chat | 62.91 |
| OrionStar-Yi-34B-Chat | 60.21 |
| Yi-34B-Chat | 59.72 |
To reproduce the results, please start a corresponding vLLM server and
follow the instructions
[here](https://sustech-tlem.static.hf.space/index.html#start-evaluating-your-model-in-3-line).
# Usage
SUS-Chat-34B is a standard LLaMA model and should be seamlessly
compatible with the LLaMA ecosystem. We provide the following example to
demonstrate how it can be used for multi-turn dialogues.
Feel free to [open an
issue](https://github.com/SUSTech-IDEA/SUS-Chat/issues) if you have any
questions.
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer # 🤗 Transformers, or
# from modelscope import AutoModelForCausalLM, AutoTokenizer # 🤖 ModelScope
def chat_template(messages):
    # Render the message list into the SUS-Chat prompt format.
    # Note: the `match` statement below requires Python >= 3.10.
    history = ""
    for message in messages:
        match message:
            case {"role": "user", "content": message}:
                history += f"### Human: {message}\n\n### Assistant: "
            case {"role": "assistant", "content": message}:
                history += message
    return history
model_path = "SUSTech/SUS-Chat-34B"
# model_path = "SUSTC/SUS-Chat-34B" # ModelScope
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, device_map="auto", torch_dtype="auto"
).eval()
messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.encode(
chat_template(messages), return_tensors="pt", add_special_tokens=False
).to("cuda")
output_ids = model.generate(input_ids, max_length=256)  # input_ids is already on the GPU; max_length includes the prompt tokens
response = tokenizer.decode(
output_ids[0][input_ids.shape[1] :], skip_special_tokens=False
)
messages.append({"role": "assistant", "content": response})
# Second round
messages.append({"role": "user", "content": "What is the capital of China?"})
input_ids = tokenizer.encode(
chat_template(messages), return_tensors="pt", add_special_tokens=False
).to("cuda")
output_ids = model.generate(input_ids, max_length=256)  # input_ids is already on the GPU; max_length includes the prompt tokens
response = tokenizer.decode(
output_ids[0][input_ids.shape[1] :], skip_special_tokens=False
)
messages.append({"role": "assistant", "content": response})
```
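For reference, the `chat_template` helper above renders the message list into a plain prompt string. A quick sanity check of that rendering (a `match`-free variant of the same logic, so it also runs on Python < 3.10; no model or GPU needed):

``` python
def chat_template(messages):
    # Same logic as the helper above, written with if/elif instead of `match`.
    history = ""
    for m in messages:
        if m["role"] == "user":
            history += f"### Human: {m['content']}\n\n### Assistant: "
        elif m["role"] == "assistant":
            history += m["content"]
    return history

rendered = chat_template(
    [
        {"role": "user", "content": "hi"},
        {"role": "assistant", "content": "Hello!"},
        {"role": "user", "content": "What is the capital of China?"},
    ]
)
# rendered == "### Human: hi\n\n### Assistant: Hello!"
#             "### Human: What is the capital of China?\n\n### Assistant: "
```

Note that each assistant reply is appended verbatim, so the next `### Human:` marker follows it directly with no separator.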
# Limitations
SUS-Chat has only undergone supervised fine-tuning and has not yet been
trained on human preference learning. As a result, it may produce
unreasonable responses in some situations and exacerbate existing issues
in language models, including hallucinations, non-determinism, and
cumulative errors. To achieve better performance for downstream tasks,
we recommend adjusting the generation configuration parameters
accordingly.
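As a sketch of that recommendation, generation settings can be collected and passed to the `model.generate` call from the usage example above. The parameter values here are illustrative assumptions, not tuned defaults:

``` python
# Illustrative sampling settings -- assumed values, adjust per downstream task.
gen_kwargs = {
    "max_new_tokens": 512,      # bound the reply length instead of total max_length
    "do_sample": True,
    "temperature": 0.7,         # lower values make outputs more deterministic
    "top_p": 0.9,
    "repetition_penalty": 1.1,  # discourages the repetition loops noted above
}
# output_ids = model.generate(input_ids, **gen_kwargs)
```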
# Disclaimer
During the training process, we used data compliance check algorithms to
ensure the compliance of the training model as much as possible. Due to
the complexity of the data and the diverse use cases of language models,
we cannot guarantee that the model will produce correct and reasonable
outputs in all scenarios. Please be aware that there is still a risk of
the model generating problematic outputs. We will not be responsible for
any risks or issues arising from misuse, misguidance, illegal use, and
related misinformation, as well as data security issues related to the
model.
# License
This model is developed entirely for academic research and free
commercial use, but it must adhere to the
[license](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt)
from [01-ai](https://huggingface.co/01-ai). |
OpenNMT/mixtral-onmt-awq-gemv | OpenNMT | "2023-12-22T15:51:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T17:32:33Z" |
This is the OpenNMT-py converted version of Mixtral 8x7B, 4-bit AWQ quantized.

The safetensors file is 24GB, so running it requires two 24GB GPUs (3090 or 4090) or one 48GB GPU (A6000).
To run the model on 2 GPUs, the config file needs:

```
world_size: 2
gpu_ranks: [0, 1]
parallel_mode: "tensor_parallel"
```
If you are lucky enough to have an A6000 (or a V100/A100/H100 with more than 32GB), then use:

```
world_size: 1
gpu_ranks: [0]
#parallel_mode: "tensor_parallel"
```
The command line to run is:
`python onmt/bin/translate.py --config /pathto/mixtral-inference-awq.yaml --src /pathto/input-vicuna.txt --output /pathto/mistral-output.txt`
For instance, `input-vicuna.txt` contains:
`USER:⦅newline⦆Show me some attractions in Boston.⦅newline⦆⦅newline⦆ASSISTANT:⦅newline⦆`
The output will be:
`Here are some attractions in Boston:⦅newline⦆⦅newline⦆1. Boston Common: This is a historic park located in the heart of Boston. It features a variety of attractions, including the Boston Common Fountain, the Boston Common Bandstand, and the Boston Common Carousel.⦅newline⦆⦅newline⦆2. Boston Public Garden: This is a historic park located in the heart of Boston. It features a variety of attractions, including the Boston Public Garden Fountain, the Boston Public Garden Bandstand, and the Boston Public Garden Carousel.⦅newline⦆⦅newline⦆3. Boston Museum of Fine Arts: This is a world-renowned art museum located in the heart of Boston. It features a variety of attractions, including the Boston Museum of Fine Arts Fountain, the Boston Museum of Fine Arts Bandstand, and the Boston Museum of Fine Arts Carousel.⦅newline⦆⦅newline⦆4. Boston Museum of Science: This is a world-renowned science museum located in the heart of Boston. It features a variety of attractions, including the Boston Museum of Science Fountain, the Boston Museum of Science Bandstand, and the Boston Museum of Science Carousel.⦅newline⦆⦅newline⦆5. Boston Museum of History: This is a world-renowned history museum located in the heart of Boston`
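The `⦅newline⦆` token is OpenNMT-py's single-line placeholder for a literal newline, which keeps each prompt on one line of the input file. A small sketch to build such a file — the helper name and the exact USER/ASSISTANT wrapper are assumptions based on the example prompt above:

``` python
NL = "⦅newline⦆"  # OpenNMT-py placeholder for a newline inside a one-line prompt

def vicuna_prompt(user_msg: str) -> str:
    # Wrap a user message in the Vicuna-style USER/ASSISTANT format shown above.
    return f"USER:{NL}{user_msg}{NL}{NL}ASSISTANT:{NL}"

# One prompt per line in the input file.
with open("input-vicuna.txt", "w", encoding="utf-8") as f:
    f.write(vicuna_prompt("Show me some attractions in Boston.") + "\n")
```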
Installation instructions:

Visit https://github.com/OpenNMT/OpenNMT-py and make sure you install `flash-attn` and `autoawq`.

Enjoy!
Detailed MMLU scoring:
```
ACC-abstract_algebra: 0.3600
ACC-anatomy: 0.6444
ACC-astronomy: 0.7303
ACC-business_ethics: 0.6400
ACC-clinical_knowledge: 0.7283
ACC-college_biology: 0.8056
ACC-college_chemistry: 0.5300
ACC-college_computer_science: 0.5900
ACC-college_mathematics: 0.3700
ACC-college_medicine: 0.6936
ACC-college_physics: 0.4510
ACC-computer_security: 0.7900
ACC-conceptual_physics: 0.6468
ACC-econometrics: 0.5614
ACC-electrical_engineering: 0.6414
ACC-elementary_mathematics: 0.4630
ACC-formal_logic: 0.4524
ACC-global_facts: 0.4600
ACC-high_school_biology: 0.8000
ACC-high_school_chemistry: 0.5320
ACC-high_school_computer_science: 0.7400
ACC-high_school_european_history: 0.8121
ACC-high_school_geography: 0.8081
ACC-high_school_government_and_politics: 0.9275
ACC-high_school_macroeconomics: 0.6923
ACC-high_school_mathematics: 0.3667
ACC-high_school_microeconomics: 0.7731
ACC-high_school_physics: 0.4636
ACC-high_school_psychology: 0.8569
ACC-high_school_statistics: 0.5278
ACC-high_school_us_history: 0.8431
ACC-high_school_world_history: 0.8650
ACC-human_aging: 0.7175
ACC-human_sexuality: 0.7710
ACC-international_law: 0.8347
ACC-jurisprudence: 0.7778
ACC-logical_fallacies: 0.7791
ACC-machine_learning: 0.5357
ACC-management: 0.7767
ACC-marketing: 0.9145
ACC-medical_genetics: 0.7100
ACC-miscellaneous: 0.8404
ACC-moral_disputes: 0.7775
ACC-moral_scenarios: 0.4112
ACC-nutrition: 0.7876
ACC-philosophy: 0.7492
ACC-prehistory: 0.7963
ACC-professional_accounting: 0.5177
ACC-professional_law: 0.5111
ACC-professional_medicine: 0.7390
ACC-professional_psychology: 0.7304
ACC-public_relations: 0.6727
ACC-security_studies: 0.7061
ACC-sociology: 0.8706
ACC-us_foreign_policy: 0.9100
ACC-virology: 0.5060
ACC-world_religions: 0.8538
ACC-all: 0.6707
[2023-12-22 16:35:03,999 INFO] total run time 7156.16
```
|
mahsamassoud/mnist-6 | mahsamassoud | "2024-02-02T06:57:34Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2023-12-13T17:33:12Z" | Entry not found |
RUXHIR2828/laroi | RUXHIR2828 | "2023-12-13T17:39:03Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T17:37:10Z" | ---
license: openrail
---
|
JugalOza/ReinforceCartpole1 | JugalOza | "2023-12-13T17:40:21Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-13T17:40:08Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforceCartpole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 467.13 +/- 75.07
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
miweru/ochat3-5_schwurpus_merged | miweru | "2023-12-13T19:19:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-12-13T17:45:12Z" | Entry not found |
ychordia/llama-2-7b-miniguanaco | ychordia | "2023-12-13T20:49:33Z" | 0 | 0 | peft | [
"peft",
"pytorch",
"llama",
"region:us"
] | null | "2023-12-13T17:50:04Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
spani/ArchDornan | spani | "2023-12-13T17:50:30Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T17:50:16Z" | ---
license: openrail
---
|
nemson/vicuna-7b-1.1 | nemson | "2023-12-17T17:11:04Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T17:51:49Z" | ---
license: llama2
---
This is a reupload |
hkivancoral/smids_5x_deit_tiny_adamax_001_fold2 | hkivancoral | "2023-12-17T04:32:43Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T17:55:11Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_tiny_adamax_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8968386023294509
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_adamax_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8883
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3644 | 1.0 | 375 | 0.3398 | 0.8702 |
| 0.2716 | 2.0 | 750 | 0.3172 | 0.8735 |
| 0.3497 | 3.0 | 1125 | 0.3400 | 0.8586 |
| 0.1669 | 4.0 | 1500 | 0.3794 | 0.8669 |
| 0.2114 | 5.0 | 1875 | 0.2911 | 0.8902 |
| 0.1067 | 6.0 | 2250 | 0.4133 | 0.8752 |
| 0.1489 | 7.0 | 2625 | 0.5329 | 0.8419 |
| 0.1233 | 8.0 | 3000 | 0.4750 | 0.8769 |
| 0.121 | 9.0 | 3375 | 0.4209 | 0.8852 |
| 0.0613 | 10.0 | 3750 | 0.3960 | 0.8918 |
| 0.0185 | 11.0 | 4125 | 0.5647 | 0.8769 |
| 0.07 | 12.0 | 4500 | 0.5185 | 0.8586 |
| 0.0467 | 13.0 | 4875 | 0.5032 | 0.8985 |
| 0.0041 | 14.0 | 5250 | 0.5742 | 0.8918 |
| 0.0599 | 15.0 | 5625 | 0.7221 | 0.8652 |
| 0.0363 | 16.0 | 6000 | 0.6853 | 0.8852 |
| 0.0212 | 17.0 | 6375 | 0.5687 | 0.8985 |
| 0.0007 | 18.0 | 6750 | 0.6790 | 0.8702 |
| 0.0025 | 19.0 | 7125 | 0.5146 | 0.8935 |
| 0.0511 | 20.0 | 7500 | 0.4949 | 0.9052 |
| 0.0231 | 21.0 | 7875 | 0.5535 | 0.8952 |
| 0.0 | 22.0 | 8250 | 0.7099 | 0.9002 |
| 0.011 | 23.0 | 8625 | 0.7090 | 0.8902 |
| 0.0118 | 24.0 | 9000 | 0.7009 | 0.9068 |
| 0.0 | 25.0 | 9375 | 0.6598 | 0.8985 |
| 0.0089 | 26.0 | 9750 | 0.7133 | 0.8902 |
| 0.0142 | 27.0 | 10125 | 0.5886 | 0.9052 |
| 0.0 | 28.0 | 10500 | 0.6881 | 0.9018 |
| 0.0001 | 29.0 | 10875 | 0.7679 | 0.8985 |
| 0.0001 | 30.0 | 11250 | 0.7339 | 0.8968 |
| 0.0038 | 31.0 | 11625 | 0.8413 | 0.8918 |
| 0.0044 | 32.0 | 12000 | 0.7669 | 0.9035 |
| 0.0049 | 33.0 | 12375 | 0.7980 | 0.9052 |
| 0.0 | 34.0 | 12750 | 0.7835 | 0.9035 |
| 0.0 | 35.0 | 13125 | 0.8137 | 0.8968 |
| 0.0 | 36.0 | 13500 | 0.8434 | 0.8968 |
| 0.0 | 37.0 | 13875 | 0.8282 | 0.8952 |
| 0.0 | 38.0 | 14250 | 0.8297 | 0.8968 |
| 0.0 | 39.0 | 14625 | 0.8386 | 0.8935 |
| 0.0034 | 40.0 | 15000 | 0.8364 | 0.8952 |
| 0.0 | 41.0 | 15375 | 0.8624 | 0.8985 |
| 0.0031 | 42.0 | 15750 | 0.8414 | 0.8968 |
| 0.0026 | 43.0 | 16125 | 0.9010 | 0.8902 |
| 0.0026 | 44.0 | 16500 | 0.8826 | 0.8952 |
| 0.0029 | 45.0 | 16875 | 0.8702 | 0.8968 |
| 0.0 | 46.0 | 17250 | 0.8727 | 0.8968 |
| 0.0055 | 47.0 | 17625 | 0.8804 | 0.8968 |
| 0.0 | 48.0 | 18000 | 0.8849 | 0.8968 |
| 0.0025 | 49.0 | 18375 | 0.8877 | 0.8968 |
| 0.0023 | 50.0 | 18750 | 0.8883 | 0.8968 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
trng1305/layoutlmv2-sroie-finetune | trng1305 | "2023-12-13T18:58:24Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T17:56:00Z" | Entry not found |
Henoka/swin-base-patch4-window7-224-finetuned-lora-scenes | Henoka | "2023-12-13T18:47:52Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/swin-base-patch4-window7-224",
"region:us"
] | null | "2023-12-13T17:56:29Z" | ---
library_name: peft
base_model: microsoft/swin-base-patch4-window7-224
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
dwang-LI/segformer-b0-finetuned-cityscapes-outputs | dwang-LI | "2023-12-13T18:04:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:04:28Z" | Entry not found |
Shiro836/RVC-Forsen | Shiro836 | "2023-12-15T18:41:11Z" | 0 | 0 | null | [
"forsen",
"RVC",
"autism",
"ayaya",
"license:mit",
"region:us"
] | null | "2023-12-13T18:06:54Z" | ---
license: mit
tags:
- forsen
- RVC
- autism
- ayaya
--- |
ADISH007/Aws_donut_10k_incremental_1_Epoch_12 | ADISH007 | "2023-12-13T18:07:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T18:07:01Z" | Entry not found |
sdadasfgdfgfdg/pacoca_turma_do_dudao_LuanKCT | sdadasfgdfgfdg | "2023-12-13T18:11:21Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T18:09:54Z" | ---
license: openrail
---
|
saikub/xslds-nsfw | saikub | "2023-12-13T18:10:26Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-12-13T18:10:25Z" | ---
license: mit
---
|
fshala/segformer_outputs | fshala | "2023-12-13T18:11:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:11:59Z" | Entry not found |
voxtell/voxtell | voxtell | "2023-12-13T18:15:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:15:07Z" | Entry not found |
GiusCat/tiffusion-mars-256 | GiusCat | "2023-12-15T21:31:57Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2023-12-13T18:15:10Z" | Entry not found |
hkivancoral/smids_3x_beit_base_sgd_0001_fold3 | hkivancoral | "2023-12-13T19:04:30Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T18:16:52Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7866666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_0001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5470
- Accuracy: 0.7867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2271 | 1.0 | 225 | 1.2660 | 0.34 |
| 1.1764 | 2.0 | 450 | 1.2039 | 0.36 |
| 1.0866 | 3.0 | 675 | 1.1482 | 0.3833 |
| 1.02 | 4.0 | 900 | 1.0954 | 0.41 |
| 0.9521 | 5.0 | 1125 | 1.0436 | 0.4433 |
| 0.9373 | 6.0 | 1350 | 0.9954 | 0.485 |
| 0.8962 | 7.0 | 1575 | 0.9512 | 0.5317 |
| 0.8694 | 8.0 | 1800 | 0.9106 | 0.5767 |
| 0.8253 | 9.0 | 2025 | 0.8739 | 0.5967 |
| 0.8297 | 10.0 | 2250 | 0.8416 | 0.635 |
| 0.8158 | 11.0 | 2475 | 0.8130 | 0.6633 |
| 0.75 | 12.0 | 2700 | 0.7869 | 0.685 |
| 0.7851 | 13.0 | 2925 | 0.7633 | 0.69 |
| 0.761 | 14.0 | 3150 | 0.7425 | 0.7017 |
| 0.6927 | 15.0 | 3375 | 0.7233 | 0.7117 |
| 0.7078 | 16.0 | 3600 | 0.7069 | 0.7217 |
| 0.698 | 17.0 | 3825 | 0.6913 | 0.7283 |
| 0.6847 | 18.0 | 4050 | 0.6778 | 0.7367 |
| 0.6863 | 19.0 | 4275 | 0.6656 | 0.7383 |
| 0.6396 | 20.0 | 4500 | 0.6548 | 0.7417 |
| 0.6511 | 21.0 | 4725 | 0.6448 | 0.745 |
| 0.6297 | 22.0 | 4950 | 0.6350 | 0.7517 |
| 0.6013 | 23.0 | 5175 | 0.6267 | 0.755 |
| 0.635 | 24.0 | 5400 | 0.6187 | 0.76 |
| 0.6174 | 25.0 | 5625 | 0.6116 | 0.7583 |
| 0.6201 | 26.0 | 5850 | 0.6053 | 0.7617 |
| 0.5888 | 27.0 | 6075 | 0.5991 | 0.7617 |
| 0.5833 | 28.0 | 6300 | 0.5934 | 0.7633 |
| 0.6387 | 29.0 | 6525 | 0.5887 | 0.7683 |
| 0.5339 | 30.0 | 6750 | 0.5839 | 0.7717 |
| 0.5756 | 31.0 | 6975 | 0.5797 | 0.7767 |
| 0.6386 | 32.0 | 7200 | 0.5758 | 0.775 |
| 0.6245 | 33.0 | 7425 | 0.5722 | 0.775 |
| 0.5779 | 34.0 | 7650 | 0.5690 | 0.7767 |
| 0.57 | 35.0 | 7875 | 0.5661 | 0.7767 |
| 0.5776 | 36.0 | 8100 | 0.5632 | 0.7767 |
| 0.5861 | 37.0 | 8325 | 0.5611 | 0.7767 |
| 0.5518 | 38.0 | 8550 | 0.5586 | 0.7767 |
| 0.604 | 39.0 | 8775 | 0.5567 | 0.7817 |
| 0.539 | 40.0 | 9000 | 0.5549 | 0.7833 |
| 0.5457 | 41.0 | 9225 | 0.5534 | 0.7833 |
| 0.6155 | 42.0 | 9450 | 0.5518 | 0.785 |
| 0.5379 | 43.0 | 9675 | 0.5506 | 0.785 |
| 0.5848 | 44.0 | 9900 | 0.5496 | 0.7867 |
| 0.5814 | 45.0 | 10125 | 0.5488 | 0.7867 |
| 0.5255 | 46.0 | 10350 | 0.5481 | 0.7867 |
| 0.5726 | 47.0 | 10575 | 0.5476 | 0.7867 |
| 0.5762 | 48.0 | 10800 | 0.5473 | 0.7867 |
| 0.6192 | 49.0 | 11025 | 0.5471 | 0.7867 |
| 0.5747 | 50.0 | 11250 | 0.5470 | 0.7867 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
platzi/platzi-vit-model-daniel-sanchez | platzi | "2023-12-13T18:22:03Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T18:17:50Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-daniel-sanchez
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-daniel-sanchez
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1488 | 3.85 | 500 | 0.0427 | 0.9925 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Lol20232022/WizardLM-7B-uncensored-GPTQ | Lol20232022 | "2023-12-13T18:19:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:19:30Z" | Entry not found |
hkivancoral/smids_3x_beit_base_rms_001_fold3 | hkivancoral | "2023-12-13T19:08:02Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T18:19:49Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7616666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6251
- Accuracy: 0.7617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9148 | 1.0 | 225 | 0.9238 | 0.505 |
| 0.8709 | 2.0 | 450 | 0.9060 | 0.515 |
| 0.8398 | 3.0 | 675 | 0.8688 | 0.5317 |
| 0.74 | 4.0 | 900 | 0.7859 | 0.5617 |
| 0.7787 | 5.0 | 1125 | 0.7847 | 0.6017 |
| 0.7532 | 6.0 | 1350 | 0.7702 | 0.63 |
| 0.7432 | 7.0 | 1575 | 0.7450 | 0.655 |
| 0.7264 | 8.0 | 1800 | 0.7610 | 0.6317 |
| 0.7321 | 9.0 | 2025 | 0.7293 | 0.655 |
| 0.6592 | 10.0 | 2250 | 0.7888 | 0.6367 |
| 0.7528 | 11.0 | 2475 | 0.7158 | 0.6633 |
| 0.7282 | 12.0 | 2700 | 0.7365 | 0.64 |
| 0.6884 | 13.0 | 2925 | 0.6939 | 0.6733 |
| 0.6852 | 14.0 | 3150 | 0.7006 | 0.67 |
| 0.6011 | 15.0 | 3375 | 0.7591 | 0.6233 |
| 0.6904 | 16.0 | 3600 | 0.6846 | 0.6717 |
| 0.6393 | 17.0 | 3825 | 0.6741 | 0.7117 |
| 0.6772 | 18.0 | 4050 | 0.6655 | 0.6683 |
| 0.6409 | 19.0 | 4275 | 0.6658 | 0.6933 |
| 0.5941 | 20.0 | 4500 | 0.6429 | 0.7017 |
| 0.5753 | 21.0 | 4725 | 0.6753 | 0.6833 |
| 0.5975 | 22.0 | 4950 | 0.6543 | 0.6917 |
| 0.5954 | 23.0 | 5175 | 0.6358 | 0.7233 |
| 0.5729 | 24.0 | 5400 | 0.6341 | 0.7133 |
| 0.6313 | 25.0 | 5625 | 0.6336 | 0.7033 |
| 0.5938 | 26.0 | 5850 | 0.6447 | 0.7083 |
| 0.5183 | 27.0 | 6075 | 0.6247 | 0.7233 |
| 0.5713 | 28.0 | 6300 | 0.6145 | 0.73 |
| 0.5948 | 29.0 | 6525 | 0.5934 | 0.7317 |
| 0.5273 | 30.0 | 6750 | 0.5971 | 0.7367 |
| 0.5431 | 31.0 | 6975 | 0.5930 | 0.7433 |
| 0.6025 | 32.0 | 7200 | 0.6434 | 0.7183 |
| 0.5898 | 33.0 | 7425 | 0.5982 | 0.7383 |
| 0.5455 | 34.0 | 7650 | 0.5983 | 0.75 |
| 0.4857 | 35.0 | 7875 | 0.6162 | 0.735 |
| 0.5822 | 36.0 | 8100 | 0.5546 | 0.7517 |
| 0.4869 | 37.0 | 8325 | 0.5748 | 0.745 |
| 0.4722 | 38.0 | 8550 | 0.5753 | 0.7417 |
| 0.4982 | 39.0 | 8775 | 0.5694 | 0.7483 |
| 0.4478 | 40.0 | 9000 | 0.5912 | 0.74 |
| 0.4295 | 41.0 | 9225 | 0.5914 | 0.75 |
| 0.4581 | 42.0 | 9450 | 0.5846 | 0.7617 |
| 0.3797 | 43.0 | 9675 | 0.5733 | 0.7667 |
| 0.4086 | 44.0 | 9900 | 0.6072 | 0.7517 |
| 0.4164 | 45.0 | 10125 | 0.6033 | 0.7583 |
| 0.3774 | 46.0 | 10350 | 0.6024 | 0.75 |
| 0.392 | 47.0 | 10575 | 0.5976 | 0.7617 |
| 0.3586 | 48.0 | 10800 | 0.6199 | 0.76 |
| 0.3854 | 49.0 | 11025 | 0.6198 | 0.7667 |
| 0.3586 | 50.0 | 11250 | 0.6251 | 0.7617 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Efendi/bert-base-banking77-pt2 | Efendi | "2023-12-13T18:21:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:21:03Z" | Entry not found |
bochu/vae | bochu | "2024-06-05T14:35:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:31:22Z" | Entry not found |
irfansss/lt_svm_model | irfansss | "2023-12-13T18:32:15Z" | 0 | 0 | null | [
"license:c-uda",
"region:us"
] | null | "2023-12-13T18:31:33Z" | ---
license: c-uda
---
|
sanghyo/FinalAssginment_model2 | sanghyo | "2023-12-13T18:34:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:34:09Z" | Entry not found |
johnpaulbin/TorchMoji | johnpaulbin | "2023-12-13T18:35:01Z" | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | "2023-12-13T18:34:27Z" | Entry not found |
markmongie/HanSoloEpVII | markmongie | "2023-12-14T17:07:59Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-12-13T18:36:04Z" | ---
license: mit
---
|
lawinsider/jina-embeddings-v2-small-en-quantized-arm64 | lawinsider | "2023-12-13T18:36:57Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"region:us"
] | fill-mask | "2023-12-13T18:36:52Z" | Entry not found |
lawinsider/jina-embeddings-v2-small-en-quantized-avx2 | lawinsider | "2023-12-13T18:37:25Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"region:us"
] | fill-mask | "2023-12-13T18:37:20Z" | Entry not found |
lawinsider/jina-embeddings-v2-small-en-quantized-avx512_vnni | lawinsider | "2023-12-13T18:37:37Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"region:us"
] | fill-mask | "2023-12-13T18:37:33Z" | Entry not found |
lawinsider/jina-embeddings-v2-small-en | lawinsider | "2023-12-13T18:38:21Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"region:us"
] | fill-mask | "2023-12-13T18:38:07Z" | Entry not found |
MatteoWood/llama-sexism-classifier | MatteoWood | "2023-12-13T18:42:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:42:59Z" | Entry not found |
otavinshow/karlvoz | otavinshow | "2023-12-13T18:45:22Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T18:44:35Z" | ---
license: openrail
---
|
fshala/1 | fshala | "2023-12-13T18:49:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:49:55Z" | Entry not found |
star23/baller8 | star23 | "2023-12-13T18:59:13Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T18:52:37Z" | Invalid username or password. |
Aksy/mistral-7b-chatbot | Aksy | "2023-12-13T18:57:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T18:57:22Z" | Entry not found |
gdurkin/segformer-b0-tiled-floods-S2-bri_grn_wet_pixel_values-Dec12-v2 | gdurkin | "2023-12-13T18:58:05Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T18:58:00Z" | Entry not found |
hkivancoral/smids_5x_deit_tiny_adamax_001_fold3 | hkivancoral | "2023-12-17T06:21:51Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T18:58:46Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_tiny_adamax_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_adamax_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0439
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
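As a rough sketch of how the `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` behaves, the per-step learning-rate value can be computed directly. This is an illustrative reimplementation of the schedule shape, not the Trainer's internal code; the step count matches the 50 epochs × 375 steps/epoch shown in the results table below.

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1, peak_lr=1e-3):
    """Linear warmup to peak_lr, then linear decay to zero (the shape of the
    `linear` scheduler with lr_scheduler_warmup_ratio=0.1)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    # Linear decay from peak_lr at the end of warmup down to 0 at total_steps.
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 18750  # 50 epochs x 375 steps per epoch
print(linear_schedule_with_warmup(0, total))      # 0.0 at the very first step
print(linear_schedule_with_warmup(1875, total))   # peak 0.001 when warmup ends
print(linear_schedule_with_warmup(total, total))  # back to 0.0 at the end
```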
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3308 | 1.0 | 375 | 0.3353 | 0.875 |
| 0.2337 | 2.0 | 750 | 0.3320 | 0.8817 |
| 0.1696 | 3.0 | 1125 | 0.3479 | 0.8783 |
| 0.1669 | 4.0 | 1500 | 0.3755 | 0.8767 |
| 0.1864 | 5.0 | 1875 | 0.3099 | 0.8983 |
| 0.1212 | 6.0 | 2250 | 0.3912 | 0.91 |
| 0.1119 | 7.0 | 2625 | 0.4167 | 0.8817 |
| 0.1024 | 8.0 | 3000 | 0.4153 | 0.8733 |
| 0.0484 | 9.0 | 3375 | 0.5188 | 0.8733 |
| 0.0551 | 10.0 | 3750 | 0.6042 | 0.885 |
| 0.0468 | 11.0 | 4125 | 0.6570 | 0.8767 |
| 0.0137 | 12.0 | 4500 | 0.6069 | 0.8733 |
| 0.0177 | 13.0 | 4875 | 0.7091 | 0.8817 |
| 0.0201 | 14.0 | 5250 | 0.7010 | 0.89 |
| 0.0183 | 15.0 | 5625 | 0.6654 | 0.8867 |
| 0.0149 | 16.0 | 6000 | 0.7079 | 0.8883 |
| 0.0065 | 17.0 | 6375 | 0.6100 | 0.8933 |
| 0.001 | 18.0 | 6750 | 0.9491 | 0.8817 |
| 0.0034 | 19.0 | 7125 | 0.8269 | 0.8833 |
| 0.0213 | 20.0 | 7500 | 0.8028 | 0.8833 |
| 0.0137 | 21.0 | 7875 | 0.7227 | 0.8933 |
| 0.0 | 22.0 | 8250 | 0.8796 | 0.8917 |
| 0.0014 | 23.0 | 8625 | 0.8924 | 0.8733 |
| 0.0002 | 24.0 | 9000 | 0.6942 | 0.8917 |
| 0.0 | 25.0 | 9375 | 0.7445 | 0.89 |
| 0.0 | 26.0 | 9750 | 0.7840 | 0.885 |
| 0.0103 | 27.0 | 10125 | 0.7469 | 0.9033 |
| 0.0 | 28.0 | 10500 | 0.8867 | 0.8783 |
| 0.0 | 29.0 | 10875 | 0.8617 | 0.8867 |
| 0.003 | 30.0 | 11250 | 0.8295 | 0.8983 |
| 0.0008 | 31.0 | 11625 | 0.9061 | 0.895 |
| 0.0 | 32.0 | 12000 | 0.8630 | 0.8967 |
| 0.0 | 33.0 | 12375 | 0.8010 | 0.9017 |
| 0.0 | 34.0 | 12750 | 0.8248 | 0.8983 |
| 0.0 | 35.0 | 13125 | 0.8438 | 0.91 |
| 0.0 | 36.0 | 13500 | 0.9235 | 0.9 |
| 0.0 | 37.0 | 13875 | 0.8167 | 0.9083 |
| 0.0 | 38.0 | 14250 | 0.8531 | 0.9033 |
| 0.0 | 39.0 | 14625 | 0.9035 | 0.9067 |
| 0.0027 | 40.0 | 15000 | 0.9614 | 0.9017 |
| 0.0 | 41.0 | 15375 | 0.9740 | 0.9017 |
| 0.0 | 42.0 | 15750 | 0.9907 | 0.9 |
| 0.0 | 43.0 | 16125 | 0.9964 | 0.9033 |
| 0.0 | 44.0 | 16500 | 1.0084 | 0.9033 |
| 0.0 | 45.0 | 16875 | 1.0215 | 0.905 |
| 0.0 | 46.0 | 17250 | 1.0234 | 0.9017 |
| 0.0 | 47.0 | 17625 | 1.0315 | 0.905 |
| 0.0 | 48.0 | 18000 | 1.0372 | 0.905 |
| 0.0 | 49.0 | 18375 | 1.0413 | 0.905 |
| 0.0 | 50.0 | 18750 | 1.0439 | 0.905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
trng1305/layoutlmv2-sroie-finetunev1 | trng1305 | "2023-12-13T20:01:39Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T19:01:00Z" | ---
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-sroie-finetunev1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-sroie-finetunev1
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Address: {'precision': 0.9885554425228891, 'recall': 0.9948809828512926, 'f1': 0.9917081260364842, 'number': 3907}
- Company: {'precision': 0.974934036939314, 'recall': 0.9912810194500336, 'f1': 0.9830395743265714, 'number': 1491}
- Date: {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428}
- Total: {'precision': 0.8826666666666667, 'recall': 0.8921832884097035, 'f1': 0.8873994638069707, 'number': 371}
- Overall Precision: 0.9794
- Overall Recall: 0.9873
- Overall F1: 0.9833
- Overall Accuracy: 0.9949
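The overall F1 reported above is simply the harmonic mean of overall precision and recall, which is easy to verify:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Overall evaluation-set numbers reported above.
print(round(f1(0.9794, 0.9873), 4))  # 0.9833
```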
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- label_smoothing_factor: 0.02
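With `label_smoothing_factor: 0.02`, each one-hot target is softened before the cross-entropy loss is computed. The sketch below shows the standard textbook formulation; the Trainer's internal implementation may differ in details, and the class count here is illustrative.

```python
def smooth_labels(one_hot, epsilon=0.02):
    """Blend a one-hot target with the uniform distribution:
    (1 - epsilon) * one_hot + epsilon / num_classes."""
    n = len(one_hot)
    return [(1 - epsilon) * y + epsilon / n for y in one_hot]

target = smooth_labels([0, 0, 1, 0])  # 4 illustrative classes
print(target)  # approximately [0.005, 0.005, 0.985, 0.005]
```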
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address | Company | Date | Total | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2409 | 1.0 | 40 | 0.1537 | {'precision': 0.9862804878048781, 'recall': 0.9936012285641157, 'f1': 0.9899273237281654, 'number': 3907} | {'precision': 0.908923076923077, 'recall': 0.9906103286384976, 'f1': 0.94801026957638, 'number': 1491} | {'precision': 0.9414414414414415, 'recall': 0.9766355140186916, 'f1': 0.9587155963302753, 'number': 428} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 371} | 0.9620 | 0.9322 | 0.9469 | 0.9843 |
| 0.1402 | 2.0 | 80 | 0.1343 | {'precision': 0.9860476915271436, 'recall': 0.9948809828512926, 'f1': 0.9904446426296343, 'number': 3907} | {'precision': 0.946257197696737, 'recall': 0.9919517102615694, 'f1': 0.9685658153241651, 'number': 1491} | {'precision': 0.9813519813519813, 'recall': 0.9836448598130841, 'f1': 0.9824970828471412, 'number': 428} | {'precision': 0.6899038461538461, 'recall': 0.7735849056603774, 'f1': 0.7293519695044473, 'number': 371} | 0.9565 | 0.9802 | 0.9682 | 0.9903 |
| 0.1259 | 3.0 | 120 | 0.1262 | {'precision': 0.9918200408997955, 'recall': 0.9930893268492449, 'f1': 0.9924542780406702, 'number': 3907} | {'precision': 0.9800266311584553, 'recall': 0.9872568745808182, 'f1': 0.9836284664216505, 'number': 1491} | {'precision': 0.9928741092636579, 'recall': 0.9766355140186916, 'f1': 0.9846878680800941, 'number': 428} | {'precision': 0.819672131147541, 'recall': 0.8086253369272237, 'f1': 0.814111261872456, 'number': 371} | 0.9789 | 0.9795 | 0.9792 | 0.9937 |
| 0.1198 | 4.0 | 160 | 0.1245 | {'precision': 0.9913309535951046, 'recall': 0.9951369337087279, 'f1': 0.9932302976114447, 'number': 3907} | {'precision': 0.9774535809018567, 'recall': 0.98859825620389, 'f1': 0.9829943314438147, 'number': 1491} | {'precision': 0.997624703087886, 'recall': 0.9813084112149533, 'f1': 0.9893992932862191, 'number': 428} | {'precision': 0.7985257985257985, 'recall': 0.876010781671159, 'f1': 0.8354755784061697, 'number': 371} | 0.9759 | 0.9855 | 0.9807 | 0.9941 |
| 0.1168 | 5.0 | 200 | 0.1249 | {'precision': 0.9918242207460398, 'recall': 0.9936012285641157, 'f1': 0.9927119294207902, 'number': 3907} | {'precision': 0.9679319371727748, 'recall': 0.9919517102615694, 'f1': 0.9797946339847631, 'number': 1491} | {'precision': 0.990632318501171, 'recall': 0.9883177570093458, 'f1': 0.9894736842105264, 'number': 428} | {'precision': 0.8372093023255814, 'recall': 0.8733153638814016, 'f1': 0.8548812664907651, 'number': 371} | 0.9763 | 0.9856 | 0.9810 | 0.9943 |
| 0.1142 | 6.0 | 240 | 0.1250 | {'precision': 0.9923175416133163, 'recall': 0.991809572562068, 'f1': 0.9920634920634921, 'number': 3907} | {'precision': 0.9813581890812251, 'recall': 0.98859825620389, 'f1': 0.9849649181423321, 'number': 1491} | {'precision': 1.0, 'recall': 0.9813084112149533, 'f1': 0.9905660377358491, 'number': 428} | {'precision': 0.8802228412256268, 'recall': 0.8517520215633423, 'f1': 0.8657534246575341, 'number': 371} | 0.9837 | 0.9819 | 0.9828 | 0.9948 |
| 0.113 | 7.0 | 280 | 0.1244 | {'precision': 0.9908139831589691, 'recall': 0.993857179421551, 'f1': 0.9923332481472016, 'number': 3907} | {'precision': 0.9788079470198675, 'recall': 0.9912810194500336, 'f1': 0.9850049983338888, 'number': 1491} | {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428} | {'precision': 0.9054441260744985, 'recall': 0.8517520215633423, 'f1': 0.8777777777777778, 'number': 371} | 0.9837 | 0.9842 | 0.9839 | 0.9952 |
| 0.112 | 8.0 | 320 | 0.1259 | {'precision': 0.988552531162554, 'recall': 0.9946250319938572, 'f1': 0.9915794845623884, 'number': 3907} | {'precision': 0.9730617608409987, 'recall': 0.9932930918846412, 'f1': 0.9830733488217724, 'number': 1491} | {'precision': 1.0, 'recall': 0.9836448598130841, 'f1': 0.9917550058892814, 'number': 428} | {'precision': 0.8663101604278075, 'recall': 0.8733153638814016, 'f1': 0.8697986577181209, 'number': 371} | 0.9782 | 0.9863 | 0.9822 | 0.9946 |
| 0.1105 | 9.0 | 360 | 0.1262 | {'precision': 0.9880559085133418, 'recall': 0.9951369337087279, 'f1': 0.991583779648049, 'number': 3907} | {'precision': 0.9788219722038385, 'recall': 0.9919517102615694, 'f1': 0.9853431045969354, 'number': 1491} | {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428} | {'precision': 0.8895027624309392, 'recall': 0.8679245283018868, 'f1': 0.878581173260573, 'number': 371} | 0.9809 | 0.9861 | 0.9835 | 0.9950 |
| 0.1102 | 10.0 | 400 | 0.1258 | {'precision': 0.9905346635968278, 'recall': 0.991041719989762, 'f1': 0.9907881269191403, 'number': 3907} | {'precision': 0.9710716633793557, 'recall': 0.9906103286384976, 'f1': 0.9807436918990704, 'number': 1491} | {'precision': 0.9976415094339622, 'recall': 0.9883177570093458, 'f1': 0.9929577464788731, 'number': 428} | {'precision': 0.8605263157894737, 'recall': 0.8814016172506739, 'f1': 0.8708388814913449, 'number': 371} | 0.9783 | 0.9842 | 0.9813 | 0.9945 |
| 0.1091 | 11.0 | 440 | 0.1263 | {'precision': 0.990316004077472, 'recall': 0.9946250319938572, 'f1': 0.9924658408887755, 'number': 3907} | {'precision': 0.9724409448818898, 'recall': 0.993963782696177, 'f1': 0.9830845771144279, 'number': 1491} | {'precision': 1.0, 'recall': 0.985981308411215, 'f1': 0.9929411764705882, 'number': 428} | {'precision': 0.8491048593350383, 'recall': 0.894878706199461, 'f1': 0.8713910761154856, 'number': 371} | 0.9778 | 0.9879 | 0.9828 | 0.9948 |
| 0.1092 | 12.0 | 480 | 0.1277 | {'precision': 0.9885437881873728, 'recall': 0.993857179421551, 'f1': 0.991193363114231, 'number': 3907} | {'precision': 0.965472312703583, 'recall': 0.993963782696177, 'f1': 0.9795109054857898, 'number': 1491} | {'precision': 0.9976359338061466, 'recall': 0.985981308411215, 'f1': 0.991774383078731, 'number': 428} | {'precision': 0.8907103825136612, 'recall': 0.8787061994609164, 'f1': 0.8846675712347355, 'number': 371} | 0.9778 | 0.9864 | 0.9821 | 0.9946 |
| 0.1082 | 13.0 | 520 | 0.1271 | {'precision': 0.9890501655207538, 'recall': 0.9941131302789864, 'f1': 0.9915751850906306, 'number': 3907} | {'precision': 0.9794019933554817, 'recall': 0.98859825620389, 'f1': 0.9839786381842456, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8477157360406091, 'recall': 0.9002695417789758, 'f1': 0.8732026143790849, 'number': 371} | 0.9782 | 0.9866 | 0.9824 | 0.9947 |
| 0.1079 | 14.0 | 560 | 0.1274 | {'precision': 0.9888040712468193, 'recall': 0.9946250319938572, 'f1': 0.991706009952788, 'number': 3907} | {'precision': 0.974934036939314, 'recall': 0.9912810194500336, 'f1': 0.9830395743265714, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8691099476439791, 'recall': 0.894878706199461, 'f1': 0.8818061088977424, 'number': 371} | 0.9786 | 0.9873 | 0.9829 | 0.9948 |
| 0.1076 | 15.0 | 600 | 0.1268 | {'precision': 0.9887983706720977, 'recall': 0.9941131302789864, 'f1': 0.9914486279514996, 'number': 3907} | {'precision': 0.9749009247027741, 'recall': 0.9899396378269618, 'f1': 0.9823627287853578, 'number': 1491} | {'precision': 0.9976359338061466, 'recall': 0.985981308411215, 'f1': 0.991774383078731, 'number': 428} | {'precision': 0.8840970350404312, 'recall': 0.8840970350404312, 'f1': 0.8840970350404312, 'number': 371} | 0.9798 | 0.9860 | 0.9829 | 0.9948 |
| 0.1076 | 16.0 | 640 | 0.1268 | {'precision': 0.988552531162554, 'recall': 0.9946250319938572, 'f1': 0.9915794845623884, 'number': 3907} | {'precision': 0.97556142668428, 'recall': 0.9906103286384976, 'f1': 0.9830282861896837, 'number': 1491} | {'precision': 0.9976359338061466, 'recall': 0.985981308411215, 'f1': 0.991774383078731, 'number': 428} | {'precision': 0.8934426229508197, 'recall': 0.8814016172506739, 'f1': 0.8873812754409769, 'number': 371} | 0.9804 | 0.9863 | 0.9833 | 0.9949 |
| 0.1073 | 17.0 | 680 | 0.1268 | {'precision': 0.9895541401273885, 'recall': 0.9941131302789864, 'f1': 0.9918283963227783, 'number': 3907} | {'precision': 0.974917491749175, 'recall': 0.9906103286384976, 'f1': 0.9827012641383899, 'number': 1491} | {'precision': 0.9976359338061466, 'recall': 0.985981308411215, 'f1': 0.991774383078731, 'number': 428} | {'precision': 0.8921832884097035, 'recall': 0.8921832884097035, 'f1': 0.8921832884097035, 'number': 371} | 0.9808 | 0.9866 | 0.9837 | 0.9950 |
| 0.1071 | 18.0 | 720 | 0.1265 | {'precision': 0.9895568008150789, 'recall': 0.9943690811364219, 'f1': 0.9919571045576407, 'number': 3907} | {'precision': 0.9761904761904762, 'recall': 0.9899396378269618, 'f1': 0.983016983016983, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8873994638069705, 'recall': 0.8921832884097035, 'f1': 0.8897849462365591, 'number': 371} | 0.9806 | 0.9866 | 0.9836 | 0.9950 |
| 0.1072 | 19.0 | 760 | 0.1271 | {'precision': 0.9885554425228891, 'recall': 0.9948809828512926, 'f1': 0.9917081260364842, 'number': 3907} | {'precision': 0.974934036939314, 'recall': 0.9912810194500336, 'f1': 0.9830395743265714, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8756613756613757, 'recall': 0.8921832884097035, 'f1': 0.8838451268357811, 'number': 371} | 0.9789 | 0.9873 | 0.9830 | 0.9948 |
| 0.1072 | 20.0 | 800 | 0.1271 | {'precision': 0.9885554425228891, 'recall': 0.9948809828512926, 'f1': 0.9917081260364842, 'number': 3907} | {'precision': 0.974934036939314, 'recall': 0.9912810194500336, 'f1': 0.9830395743265714, 'number': 1491} | {'precision': 0.9952830188679245, 'recall': 0.985981308411215, 'f1': 0.9906103286384976, 'number': 428} | {'precision': 0.8826666666666667, 'recall': 0.8921832884097035, 'f1': 0.8873994638069707, 'number': 371} | 0.9794 | 0.9873 | 0.9833 | 0.9949 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
hamzaahmedkhater25/JohnWick5_Reckoning | hamzaahmedkhater25 | "2023-12-13T19:02:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:02:24Z" | Entry not found |
sayhamza/LLM_Model_SayedHamza | sayhamza | "2023-12-13T19:04:27Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T19:04:27Z" | ---
license: apache-2.0
---
|
fshala/segformer-cloud | fshala | "2023-12-13T21:14:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2023-12-13T19:04:57Z" | ---
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-cloud
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-cloud
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_3x_beit_base_sgd_0001_fold4 | hkivancoral | "2023-12-13T19:52:54Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T19:05:21Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.77
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5543
- Accuracy: 0.77
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2016 | 1.0 | 225 | 1.2841 | 0.345 |
| 1.1719 | 2.0 | 450 | 1.2211 | 0.3617 |
| 1.0758 | 3.0 | 675 | 1.1630 | 0.3733 |
| 1.0147 | 4.0 | 900 | 1.1086 | 0.4033 |
| 1.0074 | 5.0 | 1125 | 1.0560 | 0.4317 |
| 0.9405 | 6.0 | 1350 | 1.0063 | 0.4617 |
| 0.9199 | 7.0 | 1575 | 0.9602 | 0.51 |
| 0.9125 | 8.0 | 1800 | 0.9177 | 0.5617 |
| 0.8654 | 9.0 | 2025 | 0.8771 | 0.6017 |
| 0.8229 | 10.0 | 2250 | 0.8432 | 0.6333 |
| 0.8209 | 11.0 | 2475 | 0.8129 | 0.6567 |
| 0.775 | 12.0 | 2700 | 0.7860 | 0.675 |
| 0.7435 | 13.0 | 2925 | 0.7620 | 0.6883 |
| 0.7034 | 14.0 | 3150 | 0.7408 | 0.695 |
| 0.7434 | 15.0 | 3375 | 0.7223 | 0.7033 |
| 0.7412 | 16.0 | 3600 | 0.7055 | 0.7133 |
| 0.6871 | 17.0 | 3825 | 0.6906 | 0.7167 |
| 0.6997 | 18.0 | 4050 | 0.6769 | 0.725 |
| 0.6998 | 19.0 | 4275 | 0.6646 | 0.7267 |
| 0.6623 | 20.0 | 4500 | 0.6540 | 0.7283 |
| 0.668 | 21.0 | 4725 | 0.6441 | 0.73 |
| 0.6697 | 22.0 | 4950 | 0.6349 | 0.7317 |
| 0.6394 | 23.0 | 5175 | 0.6268 | 0.7383 |
| 0.6267 | 24.0 | 5400 | 0.6193 | 0.7383 |
| 0.6154 | 25.0 | 5625 | 0.6125 | 0.7433 |
| 0.5813 | 26.0 | 5850 | 0.6070 | 0.745 |
| 0.612 | 27.0 | 6075 | 0.6014 | 0.7483 |
| 0.6011 | 28.0 | 6300 | 0.5964 | 0.7483 |
| 0.5913 | 29.0 | 6525 | 0.5915 | 0.7517 |
| 0.5609 | 30.0 | 6750 | 0.5872 | 0.76 |
| 0.5861 | 31.0 | 6975 | 0.5835 | 0.7617 |
| 0.5483 | 32.0 | 7200 | 0.5800 | 0.76 |
| 0.5986 | 33.0 | 7425 | 0.5766 | 0.7633 |
| 0.619 | 34.0 | 7650 | 0.5736 | 0.7617 |
| 0.5813 | 35.0 | 7875 | 0.5710 | 0.765 |
| 0.6084 | 36.0 | 8100 | 0.5683 | 0.7667 |
| 0.6052 | 37.0 | 8325 | 0.5664 | 0.765 |
| 0.5601 | 38.0 | 8550 | 0.5646 | 0.765 |
| 0.5878 | 39.0 | 8775 | 0.5631 | 0.7633 |
| 0.6072 | 40.0 | 9000 | 0.5616 | 0.7633 |
| 0.5597 | 41.0 | 9225 | 0.5601 | 0.7683 |
| 0.5694 | 42.0 | 9450 | 0.5588 | 0.7667 |
| 0.5553 | 43.0 | 9675 | 0.5575 | 0.77 |
| 0.5942 | 44.0 | 9900 | 0.5566 | 0.77 |
| 0.6005 | 45.0 | 10125 | 0.5559 | 0.77 |
| 0.58 | 46.0 | 10350 | 0.5553 | 0.77 |
| 0.5814 | 47.0 | 10575 | 0.5548 | 0.77 |
| 0.5609 | 48.0 | 10800 | 0.5545 | 0.7717 |
| 0.6076 | 49.0 | 11025 | 0.5543 | 0.77 |
| 0.5819 | 50.0 | 11250 | 0.5543 | 0.77 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
EthanRhys/Flannery-Masters-EX | EthanRhys | "2023-12-13T19:09:55Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T19:08:05Z" | ---
license: openrail
---
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError1.0_Seed102 | behzadnet | "2023-12-13T19:08:58Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2023-12-13T19:08:51Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
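The quantization settings above map onto transformers' `BitsAndBytesConfig` roughly as follows (a sketch assuming a recent transformers version with bitsandbytes support; fields left at their defaults, such as `llm_int8_threshold`, are omitted):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```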
### Framework versions
- PEFT 0.7.0.dev0
|
hkivancoral/smids_3x_beit_base_rms_001_fold4 | hkivancoral | "2023-12-13T19:57:07Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T19:08:56Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7583333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6701
- Accuracy: 0.7583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
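The linear scheduler with a 0.1 warmup ratio ramps the learning rate up over the first 10% of steps, then decays it linearly to zero; a pure-Python sketch of that shape (not the Hugging Face implementation itself):

```python
def linear_schedule_lr(step, total_steps, base_lr, warmup_ratio=0.1):
    """Learning rate at a given step for a linear schedule with warmup:
    ramp from 0 to base_lr over the warmup fraction of training, then
    decay linearly back to 0 at total_steps."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# With 11250 total steps (225 steps/epoch x 50 epochs) and base_lr 0.001,
# the peak learning rate is reached at step 1125, then decays to 0.
```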
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1325 | 1.0 | 225 | 1.0820 | 0.33 |
| 0.9647 | 2.0 | 450 | 0.8610 | 0.5233 |
| 0.9155 | 3.0 | 675 | 0.8470 | 0.5233 |
| 0.8045 | 4.0 | 900 | 0.7955 | 0.5633 |
| 0.9422 | 5.0 | 1125 | 0.7622 | 0.5833 |
| 0.7846 | 6.0 | 1350 | 0.7519 | 0.6167 |
| 0.7593 | 7.0 | 1575 | 0.7344 | 0.6267 |
| 0.7843 | 8.0 | 1800 | 0.7233 | 0.625 |
| 0.758 | 9.0 | 2025 | 0.6963 | 0.675 |
| 0.7521 | 10.0 | 2250 | 0.7172 | 0.6367 |
| 0.7273 | 11.0 | 2475 | 0.7162 | 0.6867 |
| 0.7253 | 12.0 | 2700 | 0.7548 | 0.6367 |
| 0.7429 | 13.0 | 2925 | 0.7073 | 0.6933 |
| 0.6572 | 14.0 | 3150 | 0.7052 | 0.6733 |
| 0.668 | 15.0 | 3375 | 0.6850 | 0.6967 |
| 0.7304 | 16.0 | 3600 | 0.6940 | 0.6633 |
| 0.6361 | 17.0 | 3825 | 0.7269 | 0.68 |
| 0.7538 | 18.0 | 4050 | 0.6743 | 0.7 |
| 0.7884 | 19.0 | 4275 | 0.6564 | 0.7067 |
| 0.6141 | 20.0 | 4500 | 0.7026 | 0.68 |
| 0.6658 | 21.0 | 4725 | 0.6553 | 0.6983 |
| 0.7013 | 22.0 | 4950 | 0.6518 | 0.7133 |
| 0.6988 | 23.0 | 5175 | 0.7048 | 0.6433 |
| 0.6506 | 24.0 | 5400 | 0.6539 | 0.725 |
| 0.6644 | 25.0 | 5625 | 0.6442 | 0.7083 |
| 0.6782 | 26.0 | 5850 | 0.6333 | 0.735 |
| 0.6752 | 27.0 | 6075 | 0.6258 | 0.72 |
| 0.7055 | 28.0 | 6300 | 0.6242 | 0.7267 |
| 0.6118 | 29.0 | 6525 | 0.6321 | 0.7333 |
| 0.6455 | 30.0 | 6750 | 0.6581 | 0.7067 |
| 0.5483 | 31.0 | 6975 | 0.6054 | 0.745 |
| 0.6021 | 32.0 | 7200 | 0.6170 | 0.7333 |
| 0.5857 | 33.0 | 7425 | 0.6206 | 0.7367 |
| 0.657 | 34.0 | 7650 | 0.6354 | 0.72 |
| 0.6083 | 35.0 | 7875 | 0.6084 | 0.7517 |
| 0.6036 | 36.0 | 8100 | 0.6122 | 0.7267 |
| 0.5986 | 37.0 | 8325 | 0.6097 | 0.7383 |
| 0.5126 | 38.0 | 8550 | 0.6043 | 0.7467 |
| 0.5361 | 39.0 | 8775 | 0.6148 | 0.7483 |
| 0.5689 | 40.0 | 9000 | 0.6233 | 0.7567 |
| 0.5001 | 41.0 | 9225 | 0.6245 | 0.7567 |
| 0.5505 | 42.0 | 9450 | 0.6430 | 0.745 |
| 0.5115 | 43.0 | 9675 | 0.6524 | 0.7333 |
| 0.5425 | 44.0 | 9900 | 0.6414 | 0.7467 |
| 0.5416 | 45.0 | 10125 | 0.6407 | 0.75 |
| 0.4698 | 46.0 | 10350 | 0.6413 | 0.7367 |
| 0.5037 | 47.0 | 10575 | 0.6665 | 0.7533 |
| 0.5074 | 48.0 | 10800 | 0.6614 | 0.7583 |
| 0.4187 | 49.0 | 11025 | 0.6632 | 0.755 |
| 0.4669 | 50.0 | 11250 | 0.6701 | 0.7583 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError1.0_Seed102 | behzadnet | "2023-12-13T19:09:07Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | "2023-12-13T19:09:04Z" | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
praiseneh/lora_model_weights_r_16_epochs_10_batch_size_4_gradient_steps_4_lr_0.001_warmup_100 | praiseneh | "2023-12-13T19:09:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:09:19Z" | Entry not found |
agailloty/houseprice | agailloty | "2023-12-13T19:09:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:09:54Z" | Entry not found |
marielandryceo/ScientificAI | marielandryceo | "2023-12-13T19:11:25Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2023-12-13T19:11:25Z" | ---
license: unknown
---
|
Claire-codes/llama2-autoTrain-alpaca-gpt-4 | Claire-codes | "2023-12-13T19:17:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:17:05Z" | Entry not found |
praiseneh/lora_model_weights_r_16_epochs_20_batch_size_4_gradient_steps_4_lr_0.001_warmup_100 | praiseneh | "2023-12-13T19:18:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:18:21Z" | Entry not found |
StefanMachine/TeachingBud | StefanMachine | "2023-12-13T19:20:37Z" | 0 | 0 | null | [
"license:llama2",
"region:us"
] | null | "2023-12-13T19:20:37Z" | ---
license: llama2
---
|
yosthin06/whisper-tiny_yosthingalindo | yosthin06 | "2024-01-03T19:09:01Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T19:22:56Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny_yosthingalindo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.33530106257378983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny_yosthingalindo
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5824
- Wer Ortho: 0.3424
- Wer: 0.3353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.3677 | 1.72 | 50 | 0.5198 | 0.3849 | 0.3648 |
| 0.1925 | 3.45 | 100 | 0.5038 | 0.3671 | 0.3518 |
| 0.0836 | 5.17 | 150 | 0.5206 | 0.3547 | 0.3406 |
| 0.0265 | 6.9 | 200 | 0.5520 | 0.3627 | 0.3518 |
| 0.008 | 8.62 | 250 | 0.5824 | 0.3424 | 0.3353 |
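The `Wer` column above is word error rate: word-level edit distance divided by the number of reference words. A minimal dynamic-programming sketch of that computation (the numbers reported here come from the standard implementations, e.g. `jiwer`/`evaluate`, not this code):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance between reference
    and hypothesis, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("turn on the lights", "turn off the light"))  # 0.5 (2 substitutions / 4 words)
```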
### Framework versions
- Transformers 4.32.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
prajwalJumde/classification_13Dec | prajwalJumde | "2023-12-13T19:25:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:25:21Z" | Entry not found |
faisaltareque/BengaliByteLevelBPETokenizerFast | faisaltareque | "2023-12-14T14:17:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:27:28Z" | Entry not found |
ThuyNT03/KLTN_COQE_viT5_MvP_v1 | ThuyNT03 | "2023-12-14T00:51:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T19:34:54Z" | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_MvP_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_MvP_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
AigizK/wav2vec2-large-mms-1b-tatar | AigizK | "2024-01-23T18:13:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T19:36:39Z" | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-tatar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-tatar
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1884
- Wer: 0.1618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.734 | 0.0 | 100 | 0.3737 | 0.3767 |
| 0.415 | 0.0 | 200 | 0.3114 | 0.2868 |
| 0.3977 | 0.0 | 300 | 0.2933 | 0.2682 |
| 0.374 | 0.0 | 400 | 0.3043 | 0.2897 |
| 0.37 | 0.01 | 500 | 0.3074 | 0.2717 |
| 0.3503 | 0.01 | 600 | 0.3097 | 0.2818 |
| 0.3765 | 0.01 | 700 | 0.2897 | 0.2739 |
| 0.3411 | 0.01 | 800 | 0.2865 | 0.2660 |
| 0.3448 | 0.01 | 900 | 0.2885 | 0.2509 |
| 0.3363 | 0.01 | 1000 | 0.2857 | 0.2538 |
| 0.3445 | 0.01 | 1100 | 0.2767 | 0.2451 |
| 0.2992 | 0.01 | 1200 | 0.2881 | 0.2509 |
| 0.311 | 0.01 | 1300 | 0.3517 | 0.2401 |
| 0.2884 | 0.01 | 1400 | 0.3339 | 0.2495 |
| 0.3027 | 0.01 | 1500 | 0.3485 | 0.2595 |
| 0.2891 | 0.02 | 1600 | 0.3452 | 0.2574 |
| 0.2702 | 0.02 | 1700 | 0.3474 | 0.2588 |
| 0.2754 | 0.02 | 1800 | 0.3471 | 0.2437 |
| 0.265 | 0.02 | 1900 | 0.3507 | 0.2459 |
| 0.274 | 0.02 | 2000 | 0.3546 | 0.2365 |
| 0.2792 | 0.02 | 2100 | 0.3641 | 0.2509 |
| 0.2648 | 0.02 | 2200 | 0.3623 | 0.2265 |
| 0.2668 | 0.02 | 2300 | 0.3299 | 0.2315 |
| 0.2615 | 0.02 | 2400 | 0.3750 | 0.2408 |
| 0.2774 | 0.03 | 2500 | 0.3363 | 0.2365 |
| 0.2627 | 0.03 | 2600 | 0.3280 | 0.2315 |
| 0.264 | 0.03 | 2700 | 0.3240 | 0.2315 |
| 0.2634 | 0.03 | 2800 | 0.3512 | 0.2236 |
| 0.2745 | 0.03 | 2900 | 0.3326 | 0.2265 |
| 0.2787 | 0.03 | 3000 | 0.3194 | 0.2358 |
| 0.2654 | 0.03 | 3100 | 0.3238 | 0.2322 |
| 0.2704 | 0.03 | 3200 | 0.3342 | 0.2351 |
| 0.2599 | 0.03 | 3300 | 0.3518 | 0.2387 |
| 0.2477 | 0.03 | 3400 | 0.3258 | 0.2301 |
| 0.2597 | 0.04 | 3500 | 0.3151 | 0.2344 |
| 0.2582 | 0.04 | 3600 | 0.3250 | 0.2315 |
| 0.2563 | 0.04 | 3700 | 0.3322 | 0.2344 |
| 0.269 | 0.04 | 3800 | 0.3218 | 0.2416 |
| 0.2572 | 0.04 | 3900 | 0.3196 | 0.2308 |
| 0.2683 | 0.04 | 4000 | 0.3497 | 0.2365 |
| 0.2542 | 0.04 | 4100 | 0.3290 | 0.2466 |
| 0.2545 | 0.04 | 4200 | 0.3238 | 0.2437 |
| 0.2684 | 0.04 | 4300 | 0.3131 | 0.2221 |
| 0.2518 | 0.04 | 4400 | 0.3267 | 0.2286 |
| 0.2405 | 0.04 | 4500 | 0.3354 | 0.2243 |
| 0.2657 | 0.05 | 4600 | 0.3380 | 0.2351 |
| 0.2658 | 0.05 | 4700 | 0.3372 | 0.2437 |
| 0.2497 | 0.05 | 4800 | 0.3475 | 0.2286 |
| 0.256 | 0.05 | 4900 | 0.3447 | 0.2401 |
| 0.2631 | 0.05 | 5000 | 0.2976 | 0.2416 |
| 0.2708 | 0.05 | 5100 | 0.3358 | 0.2344 |
| 0.2676 | 0.05 | 5200 | 0.3251 | 0.2394 |
| 0.2523 | 0.05 | 5300 | 0.3347 | 0.2315 |
| 0.248 | 0.05 | 5400 | 0.3308 | 0.2315 |
| 0.2284 | 0.06 | 5500 | 0.3338 | 0.2372 |
| 0.2504 | 0.06 | 5600 | 0.3475 | 0.2308 |
| 0.2531 | 0.06 | 5700 | 0.3227 | 0.2336 |
| 0.2544 | 0.06 | 5800 | 0.3184 | 0.2315 |
| 0.2537 | 0.06 | 5900 | 0.3083 | 0.2265 |
| 0.257 | 0.06 | 6000 | 0.3173 | 0.2308 |
| 0.2531 | 0.06 | 6100 | 0.3298 | 0.2322 |
| 0.2497 | 0.06 | 6200 | 0.3001 | 0.2250 |
| 0.2602 | 0.06 | 6300 | 0.3262 | 0.2301 |
| 0.2341 | 0.06 | 6400 | 0.3273 | 0.2229 |
| 0.2521 | 0.07 | 6500 | 0.3334 | 0.2286 |
| 0.2597 | 0.07 | 6600 | 0.3171 | 0.2157 |
| 0.2496 | 0.07 | 6700 | 0.3340 | 0.2243 |
| 0.2412 | 0.07 | 6800 | 0.3035 | 0.2157 |
| 0.2478 | 0.07 | 6900 | 0.3274 | 0.2171 |
| 0.2494 | 0.07 | 7000 | 0.3218 | 0.2250 |
| 0.2525 | 0.07 | 7100 | 0.3321 | 0.2265 |
| 0.2537 | 0.07 | 7200 | 0.3262 | 0.2265 |
| 0.2509 | 0.07 | 7300 | 0.3286 | 0.2322 |
| 0.2434 | 0.07 | 7400 | 0.3259 | 0.2193 |
| 0.2394 | 0.07 | 7500 | 0.3303 | 0.2185 |
| 0.2527 | 0.08 | 7600 | 0.3237 | 0.2229 |
| 0.3968 | 0.08 | 7700 | 0.2598 | 0.2315 |
| 0.3999 | 0.08 | 7800 | 0.2541 | 0.2272 |
| 0.3684 | 0.08 | 7900 | 0.2858 | 0.2351 |
| 0.3622 | 0.08 | 8000 | 0.2591 | 0.2214 |
| 0.3823 | 0.08 | 8100 | 0.2529 | 0.2250 |
| 0.3638 | 0.08 | 8200 | 0.2598 | 0.2301 |
| 0.4044 | 0.08 | 8300 | 0.2586 | 0.2214 |
| 0.3785 | 0.08 | 8400 | 0.2464 | 0.2265 |
| 0.3412 | 0.09 | 8500 | 0.2611 | 0.2214 |
| 0.3626 | 0.09 | 8600 | 0.2428 | 0.2193 |
| 0.3571 | 0.09 | 8700 | 0.2369 | 0.2121 |
| 0.3877 | 0.09 | 8800 | 0.2461 | 0.2243 |
| 0.3797 | 0.09 | 8900 | 0.2574 | 0.2272 |
| 0.3454 | 0.09 | 9000 | 0.2653 | 0.2135 |
| 0.347 | 0.09 | 9100 | 0.2584 | 0.2265 |
| 0.3592 | 0.09 | 9200 | 0.2495 | 0.2250 |
| 0.3536 | 0.09 | 9300 | 0.2490 | 0.2207 |
| 0.3655 | 0.09 | 9400 | 0.2447 | 0.2214 |
| 0.3668 | 0.1 | 9500 | 0.2423 | 0.2078 |
| 0.3491 | 0.1 | 9600 | 0.2456 | 0.2164 |
| 0.3442 | 0.1 | 9700 | 0.2411 | 0.2027 |
| 0.372 | 0.1 | 9800 | 0.2618 | 0.2193 |
| 0.3345 | 0.1 | 9900 | 0.2500 | 0.2272 |
| 0.3628 | 0.1 | 10000 | 0.2438 | 0.2128 |
| 0.3781 | 0.1 | 10100 | 0.2546 | 0.2200 |
| 0.3478 | 0.1 | 10200 | 0.2553 | 0.2157 |
| 0.3461 | 0.1 | 10300 | 0.2543 | 0.2128 |
| 0.35 | 0.1 | 10400 | 0.2418 | 0.2121 |
| 0.3557 | 0.1 | 10500 | 0.2628 | 0.2207 |
| 0.3384 | 0.11 | 10600 | 0.2396 | 0.2171 |
| 0.3373 | 0.11 | 10700 | 0.2582 | 0.2200 |
| 0.3596 | 0.11 | 10800 | 0.2554 | 0.2092 |
| 0.3218 | 0.11 | 10900 | 0.2389 | 0.2178 |
| 0.3532 | 0.11 | 11000 | 0.2454 | 0.2279 |
| 0.3661 | 0.11 | 11100 | 0.2455 | 0.2250 |
| 0.362 | 0.11 | 11200 | 0.2461 | 0.2193 |
| 0.3465 | 0.11 | 11300 | 0.2437 | 0.2243 |
| 0.3345 | 0.11 | 11400 | 0.2427 | 0.2085 |
| 0.3691 | 0.12 | 11500 | 0.2488 | 0.2221 |
| 0.3386 | 0.12 | 11600 | 0.2476 | 0.2135 |
| 0.3425 | 0.12 | 11700 | 0.2449 | 0.2243 |
| 0.345 | 0.12 | 11800 | 0.2480 | 0.2135 |
| 0.3426 | 0.12 | 11900 | 0.2587 | 0.2272 |
| 0.3234 | 0.12 | 12000 | 0.2393 | 0.2200 |
| 0.3402 | 0.12 | 12100 | 0.2471 | 0.2185 |
| 0.3225 | 0.12 | 12200 | 0.2551 | 0.2157 |
| 0.3503 | 0.12 | 12300 | 0.2539 | 0.2243 |
| 0.3396 | 0.12 | 12400 | 0.2596 | 0.2236 |
| 0.3182 | 0.12 | 12500 | 0.2646 | 0.2279 |
| 0.3281 | 0.13 | 12600 | 0.2660 | 0.2229 |
| 0.3444 | 0.13 | 12700 | 0.2469 | 0.2128 |
| 0.3323 | 0.13 | 12800 | 0.2526 | 0.2178 |
| 0.3248 | 0.13 | 12900 | 0.2558 | 0.2157 |
| 0.3317 | 0.13 | 13000 | 0.2454 | 0.2157 |
| 0.3441 | 1.0 | 13100 | 0.2380 | 0.2243 |
| 0.3359 | 1.0 | 13200 | 0.2251 | 0.2157 |
| 0.3413 | 1.0 | 13300 | 0.2310 | 0.2142 |
| 0.3283 | 1.0 | 13400 | 0.2275 | 0.2193 |
| 0.3171 | 1.0 | 13500 | 0.2290 | 0.2236 |
| 0.3153 | 1.01 | 13600 | 0.2341 | 0.2229 |
| 0.3143 | 1.01 | 13700 | 0.2358 | 0.2243 |
| 0.3401 | 1.01 | 13800 | 0.2374 | 0.2157 |
| 0.2979 | 1.01 | 13900 | 0.2335 | 0.2207 |
| 0.3075 | 1.01 | 14000 | 0.2288 | 0.2221 |
| 0.308 | 1.01 | 14100 | 0.2354 | 0.2221 |
| 0.3272 | 1.01 | 14200 | 0.2339 | 0.2214 |
| 0.2748 | 1.01 | 14300 | 0.2411 | 0.2286 |
| 0.258 | 1.01 | 14400 | 0.3018 | 0.2121 |
| 0.2607 | 1.01 | 14500 | 0.2944 | 0.2142 |
| 0.2526 | 1.02 | 14600 | 0.3000 | 0.2178 |
| 0.2522 | 1.02 | 14700 | 0.2988 | 0.2185 |
| 0.2374 | 1.02 | 14800 | 0.2888 | 0.2150 |
| 0.253 | 1.02 | 14900 | 0.2888 | 0.2135 |
| 0.2349 | 1.02 | 15000 | 0.3067 | 0.2106 |
| 0.2511 | 1.02 | 15100 | 0.2910 | 0.2128 |
| 0.2428 | 1.02 | 15200 | 0.2937 | 0.2114 |
| 0.2262 | 1.02 | 15300 | 0.3026 | 0.2193 |
| 0.2467 | 1.02 | 15400 | 0.2996 | 0.2221 |
| 0.2243 | 1.02 | 15500 | 0.3104 | 0.2049 |
| 0.2423 | 1.03 | 15600 | 0.2798 | 0.2114 |
| 0.2339 | 1.03 | 15700 | 0.2699 | 0.2128 |
| 0.2448 | 1.03 | 15800 | 0.3051 | 0.2106 |
| 0.2373 | 1.03 | 15900 | 0.3193 | 0.2135 |
| 0.249 | 1.03 | 16000 | 0.2992 | 0.2085 |
| 0.2473 | 1.03 | 16100 | 0.2982 | 0.2135 |
| 0.2427 | 1.03 | 16200 | 0.3118 | 0.2150 |
| 0.2439 | 1.03 | 16300 | 0.3238 | 0.2106 |
| 0.2317 | 1.03 | 16400 | 0.3075 | 0.2092 |
| 0.2257 | 1.03 | 16500 | 0.3110 | 0.2243 |
| 0.2418 | 1.04 | 16600 | 0.3005 | 0.2150 |
| 0.2264 | 1.04 | 16700 | 0.2978 | 0.2200 |
| 0.2389 | 1.04 | 16800 | 0.3078 | 0.2035 |
| 0.2457 | 1.04 | 16900 | 0.3227 | 0.2142 |
| 0.2479 | 1.04 | 17000 | 0.2922 | 0.2106 |
| 0.242 | 1.04 | 17100 | 0.2943 | 0.2099 |
| 0.2218 | 1.04 | 17200 | 0.3123 | 0.2099 |
| 0.2442 | 1.04 | 17300 | 0.3217 | 0.2157 |
| 0.2467 | 1.04 | 17400 | 0.3133 | 0.2078 |
| 0.2296 | 1.04 | 17500 | 0.3113 | 0.2128 |
| 0.2272 | 1.05 | 17600 | 0.3082 | 0.2085 |
| 0.2462 | 1.05 | 17700 | 0.3170 | 0.2121 |
| 0.2378 | 1.05 | 17800 | 0.3133 | 0.2150 |
| 0.244 | 1.05 | 17900 | 0.3041 | 0.2092 |
| 0.232 | 1.05 | 18000 | 0.3113 | 0.2078 |
| 0.2511 | 1.05 | 18100 | 0.2830 | 0.2078 |
| 0.2487 | 1.05 | 18200 | 0.3015 | 0.2157 |
| 0.2302 | 1.05 | 18300 | 0.2813 | 0.2049 |
| 0.2256 | 1.05 | 18400 | 0.3110 | 0.1999 |
| 0.2273 | 1.05 | 18500 | 0.3183 | 0.2020 |
| 0.2256 | 1.06 | 18600 | 0.3083 | 0.2027 |
| 0.2242 | 1.06 | 18700 | 0.3042 | 0.1977 |
| 0.2436 | 1.06 | 18800 | 0.2861 | 0.1934 |
| 0.2345 | 1.06 | 18900 | 0.2876 | 0.1941 |
| 0.2482 | 1.06 | 19000 | 0.3003 | 0.2027 |
| 0.2236 | 1.06 | 19100 | 0.3085 | 0.1955 |
| 0.2199 | 1.06 | 19200 | 0.3150 | 0.2006 |
| 0.2305 | 1.06 | 19300 | 0.3172 | 0.2020 |
| 0.2395 | 1.06 | 19400 | 0.2879 | 0.2013 |
| 0.2301 | 1.06 | 19500 | 0.2818 | 0.2013 |
| 0.2496 | 1.07 | 19600 | 0.2883 | 0.2049 |
| 0.2332 | 1.07 | 19700 | 0.2962 | 0.2078 |
| 0.2193 | 1.07 | 19800 | 0.3092 | 0.2020 |
| 0.2268 | 1.07 | 19900 | 0.3079 | 0.2027 |
| 0.2262 | 1.07 | 20000 | 0.2986 | 0.2013 |
| 0.2391 | 1.07 | 20100 | 0.2974 | 0.1927 |
| 0.2374 | 1.07 | 20200 | 0.2993 | 0.1999 |
| 0.2319 | 1.07 | 20300 | 0.2974 | 0.2056 |
| 0.2273 | 1.07 | 20400 | 0.3122 | 0.2049 |
| 0.2303 | 1.07 | 20500 | 0.3386 | 0.2114 |
| 0.2362 | 1.08 | 20600 | 0.2870 | 0.2085 |
| 0.242 | 1.08 | 20700 | 0.2837 | 0.2135 |
| 0.3666 | 1.08 | 20800 | 0.2503 | 0.2171 |
| 0.3604 | 1.08 | 20900 | 0.2357 | 0.2085 |
| 0.3385 | 1.08 | 21000 | 0.2360 | 0.2085 |
| 0.3461 | 1.08 | 21100 | 0.2391 | 0.2070 |
| 0.3348 | 1.08 | 21200 | 0.2389 | 0.2106 |
| 0.3415 | 1.08 | 21300 | 0.2364 | 0.2128 |
| 0.36 | 1.08 | 21400 | 0.2356 | 0.2135 |
| 0.3513 | 1.08 | 21500 | 0.2439 | 0.2070 |
| 0.3097 | 1.09 | 21600 | 0.2308 | 0.2027 |
| 0.3396 | 1.09 | 21700 | 0.2405 | 0.2070 |
| 0.3427 | 1.09 | 21800 | 0.2391 | 0.2078 |
| 0.3612 | 1.09 | 21900 | 0.2463 | 0.2027 |
| 0.3626 | 1.09 | 22000 | 0.2335 | 0.2178 |
| 0.3252 | 1.09 | 22100 | 0.2361 | 0.2027 |
| 0.314 | 1.09 | 22200 | 0.2319 | 0.2049 |
| 0.3394 | 1.09 | 22300 | 0.2342 | 0.2078 |
| 0.3313 | 1.09 | 22400 | 0.2425 | 0.2056 |
| 0.3414 | 1.09 | 22500 | 0.2311 | 0.2085 |
| 0.3307 | 1.1 | 22600 | 0.2347 | 0.1991 |
| 0.3436 | 1.1 | 22700 | 0.2515 | 0.2063 |
| 0.3221 | 1.1 | 22800 | 0.2415 | 0.2013 |
| 0.3272 | 1.1 | 22900 | 0.2275 | 0.2035 |
| 0.3193 | 1.1 | 23000 | 0.2321 | 0.2013 |
| 0.329 | 1.1 | 23100 | 0.2319 | 0.1991 |
| 0.3451 | 1.1 | 23200 | 0.2306 | 0.2070 |
| 0.3312 | 1.1 | 23300 | 0.2385 | 0.1984 |
| 0.3266 | 1.1 | 23400 | 0.2372 | 0.2157 |
| 0.3258 | 1.1 | 23500 | 0.2401 | 0.2128 |
| 0.3178 | 1.11 | 23600 | 0.2453 | 0.2042 |
| 0.3253 | 1.11 | 23700 | 0.2451 | 0.2171 |
| 0.3308 | 1.11 | 23800 | 0.2309 | 0.2027 |
| 0.3243 | 1.11 | 23900 | 0.2411 | 0.2085 |
| 0.3225 | 1.11 | 24000 | 0.2352 | 0.2049 |
| 0.34 | 1.11 | 24100 | 0.2376 | 0.2063 |
| 0.3474 | 1.11 | 24200 | 0.2374 | 0.2056 |
| 0.3284 | 1.11 | 24300 | 0.2346 | 0.2135 |
| 0.3141 | 1.11 | 24400 | 0.2519 | 0.2078 |
| 0.3255 | 1.11 | 24500 | 0.2381 | 0.2078 |
| 0.3397 | 1.12 | 24600 | 0.2345 | 0.2114 |
| 0.3372 | 1.12 | 24700 | 0.2284 | 0.2106 |
| 0.3403 | 1.12 | 24800 | 0.2273 | 0.2128 |
| 0.3219 | 1.12 | 24900 | 0.2499 | 0.2063 |
| 0.3172 | 1.12 | 25000 | 0.2445 | 0.2106 |
| 0.3123 | 1.12 | 25100 | 0.2279 | 0.2142 |
| 0.3097 | 1.12 | 25200 | 0.2463 | 0.2135 |
| 0.3214 | 1.12 | 25300 | 0.2353 | 0.2114 |
| 0.3357 | 1.12 | 25400 | 0.2568 | 0.2121 |
| 0.3239 | 1.12 | 25500 | 0.2553 | 0.2114 |
| 0.3124 | 1.13 | 25600 | 0.2418 | 0.2200 |
| 0.3068 | 1.13 | 25700 | 0.2422 | 0.2150 |
| 0.3064 | 1.13 | 25800 | 0.2451 | 0.2070 |
| 0.2955 | 1.13 | 25900 | 0.2310 | 0.2085 |
| 0.3152 | 1.13 | 26000 | 0.2219 | 0.2099 |
| 0.2934 | 1.13 | 26100 | 0.2289 | 0.2135 |
| 0.3373 | 2.0 | 26200 | 0.2199 | 0.2164 |
| 0.3294 | 2.0 | 26300 | 0.2121 | 0.2078 |
| 0.3155 | 2.0 | 26400 | 0.2132 | 0.2092 |
| 0.3139 | 2.0 | 26500 | 0.2170 | 0.2135 |
| 0.3186 | 2.0 | 26600 | 0.2181 | 0.2092 |
| 0.2963 | 2.01 | 26700 | 0.2168 | 0.2142 |
| 0.3143 | 2.01 | 26800 | 0.2130 | 0.2092 |
| 0.3159 | 2.01 | 26900 | 0.2139 | 0.2099 |
| 0.2978 | 2.01 | 27000 | 0.2196 | 0.2193 |
| 0.2888 | 2.01 | 27100 | 0.2162 | 0.2121 |
| 0.2854 | 2.01 | 27200 | 0.2159 | 0.2135 |
| 0.2913 | 2.01 | 27300 | 0.2168 | 0.2121 |
| 0.2569 | 2.01 | 27400 | 0.2662 | 0.2114 |
| 0.2275 | 2.01 | 27500 | 0.2800 | 0.2006 |
| 0.2397 | 2.01 | 27600 | 0.2693 | 0.2114 |
| 0.2453 | 2.02 | 27700 | 0.2756 | 0.2070 |
| 0.233 | 2.02 | 27800 | 0.2884 | 0.2049 |
| 0.2226 | 2.02 | 27900 | 0.2765 | 0.2078 |
| 0.2278 | 2.02 | 28000 | 0.2944 | 0.2013 |
| 0.2241 | 2.02 | 28100 | 0.2897 | 0.2027 |
| 0.2388 | 2.02 | 28200 | 0.2881 | 0.2020 |
| 0.2301 | 2.02 | 28300 | 0.2844 | 0.1963 |
| 0.2164 | 2.02 | 28400 | 0.2978 | 0.2020 |
| 0.2286 | 2.02 | 28500 | 0.2919 | 0.1991 |
| 0.222 | 2.02 | 28600 | 0.2748 | 0.1970 |
| 0.2257 | 2.03 | 28700 | 0.2768 | 0.1984 |
| 0.222 | 2.03 | 28800 | 0.2665 | 0.1948 |
| 0.2333 | 2.03 | 28900 | 0.2796 | 0.1999 |
| 0.2243 | 2.03 | 29000 | 0.2913 | 0.2020 |
| 0.2556 | 2.03 | 29100 | 0.2831 | 0.1891 |
| 0.2185 | 2.03 | 29200 | 0.2784 | 0.1970 |
| 0.2207 | 2.03 | 29300 | 0.2896 | 0.1884 |
| 0.2326 | 2.03 | 29400 | 0.2907 | 0.2035 |
| 0.211 | 2.03 | 29500 | 0.2868 | 0.1927 |
| 0.2156 | 2.03 | 29600 | 0.2917 | 0.1977 |
| 0.2311 | 2.04 | 29700 | 0.2749 | 0.1999 |
| 0.2131 | 2.04 | 29800 | 0.2994 | 0.1905 |
| 0.2251 | 2.04 | 29900 | 0.2868 | 0.1970 |
| 0.2252 | 2.04 | 30000 | 0.2845 | 0.1999 |
| 0.2284 | 2.04 | 30100 | 0.2898 | 0.2035 |
| 0.2256 | 2.04 | 30200 | 0.2919 | 0.2006 |
| 0.2139 | 2.04 | 30300 | 0.2977 | 0.1999 |
| 0.2326 | 2.04 | 30400 | 0.2881 | 0.2078 |
| 0.2247 | 2.04 | 30500 | 0.2769 | 0.2027 |
| 0.226 | 2.04 | 30600 | 0.2787 | 0.2035 |
| 0.2234 | 2.05 | 30700 | 0.2764 | 0.2035 |
| 0.2268 | 2.05 | 30800 | 0.2925 | 0.1977 |
| 0.2113 | 2.05 | 30900 | 0.2949 | 0.2035 |
| 0.2301 | 2.05 | 31000 | 0.2882 | 0.2020 |
| 0.2153 | 2.05 | 31100 | 0.2915 | 0.1984 |
| 0.2341 | 2.05 | 31200 | 0.2841 | 0.2013 |
| 0.2276 | 2.05 | 31300 | 0.2804 | 0.1934 |
| 0.2244 | 2.05 | 31400 | 0.2824 | 0.1999 |
| 0.2088 | 2.05 | 31500 | 0.2928 | 0.1999 |
| 0.2073 | 2.05 | 31600 | 0.2911 | 0.1977 |
| 0.2232 | 2.06 | 31700 | 0.2925 | 0.1991 |
| 0.2197 | 2.06 | 31800 | 0.2799 | 0.1999 |
| 0.2265 | 2.06 | 31900 | 0.2780 | 0.1927 |
| 0.2246 | 2.06 | 32000 | 0.2841 | 0.1955 |
| 0.2289 | 2.06 | 32100 | 0.2702 | 0.1941 |
| 0.2118 | 2.06 | 32200 | 0.2926 | 0.1999 |
| 0.2127 | 2.06 | 32300 | 0.2932 | 0.1977 |
| 0.2227 | 2.06 | 32400 | 0.2883 | 0.1977 |
| 0.2213 | 2.06 | 32500 | 0.2920 | 0.1999 |
| 0.2352 | 2.06 | 32600 | 0.2742 | 0.2063 |
| 0.2157 | 2.07 | 32700 | 0.2796 | 0.1955 |
| 0.2157 | 2.07 | 32800 | 0.2870 | 0.2063 |
| 0.2085 | 2.07 | 32900 | 0.2765 | 0.2049 |
| 0.2138 | 2.07 | 33000 | 0.2915 | 0.2078 |
| 0.2145 | 2.07 | 33100 | 0.2912 | 0.1927 |
| 0.2104 | 2.07 | 33200 | 0.2702 | 0.1898 |
| 0.2196 | 2.07 | 33300 | 0.2677 | 0.1891 |
| 0.2265 | 2.07 | 33400 | 0.2855 | 0.1884 |
| 0.2132 | 2.07 | 33500 | 0.2962 | 0.1970 |
| 0.2202 | 2.07 | 33600 | 0.2948 | 0.1934 |
| 0.2253 | 2.08 | 33700 | 0.2820 | 0.2013 |
| 0.2749 | 2.08 | 33800 | 0.2453 | 0.1991 |
| 0.3476 | 2.08 | 33900 | 0.2301 | 0.2035 |
| 0.3255 | 2.08 | 34000 | 0.2231 | 0.1948 |
| 0.3219 | 2.08 | 34100 | 0.2354 | 0.1948 |
| 0.3436 | 2.08 | 34200 | 0.2154 | 0.1999 |
| 0.3203 | 2.08 | 34300 | 0.2268 | 0.1948 |
| 0.3551 | 2.08 | 34400 | 0.2189 | 0.1970 |
| 0.3304 | 2.08 | 34500 | 0.2204 | 0.1934 |
| 0.3227 | 2.08 | 34600 | 0.2222 | 0.2020 |
| 0.3117 | 2.09 | 34700 | 0.2287 | 0.1927 |
| 0.3231 | 2.09 | 34800 | 0.2229 | 0.1884 |
| 0.3302 | 2.09 | 34900 | 0.2262 | 0.1977 |
| 0.3522 | 2.09 | 35000 | 0.2313 | 0.2013 |
| 0.3218 | 2.09 | 35100 | 0.2218 | 0.1934 |
| 0.309 | 2.09 | 35200 | 0.2227 | 0.1905 |
| 0.3114 | 2.09 | 35300 | 0.2181 | 0.1891 |
| 0.3215 | 2.09 | 35400 | 0.2334 | 0.1963 |
| 0.3129 | 2.09 | 35500 | 0.2307 | 0.2027 |
| 0.3285 | 2.09 | 35600 | 0.2311 | 0.2006 |
| 0.3122 | 2.1 | 35700 | 0.2181 | 0.2070 |
| 0.3122 | 2.1 | 35800 | 0.2253 | 0.1934 |
| 0.3259 | 2.1 | 35900 | 0.2295 | 0.1934 |
| 0.3175 | 2.1 | 36000 | 0.2362 | 0.1977 |
| 0.3005 | 2.1 | 36100 | 0.2203 | 0.2013 |
| 0.3379 | 2.1 | 36200 | 0.2278 | 0.1934 |
| 0.3254 | 2.1 | 36300 | 0.2236 | 0.1891 |
| 0.2961 | 2.1 | 36400 | 0.2200 | 0.1927 |
| 0.3145 | 2.1 | 36500 | 0.2422 | 0.1984 |
| 0.3312 | 2.1 | 36600 | 0.2243 | 0.1991 |
| 0.297 | 2.11 | 36700 | 0.2180 | 0.2006 |
| 0.2973 | 2.11 | 36800 | 0.2261 | 0.1963 |
| 0.3078 | 2.11 | 36900 | 0.2255 | 0.1970 |
| 0.3081 | 2.11 | 37000 | 0.2349 | 0.2020 |
| 0.3069 | 2.11 | 37100 | 0.2189 | 0.1941 |
| 0.3339 | 2.11 | 37200 | 0.2242 | 0.1919 |
| 0.319 | 2.11 | 37300 | 0.2286 | 0.1927 |
| 0.3219 | 2.11 | 37400 | 0.2284 | 0.1999 |
| 0.2991 | 2.11 | 37500 | 0.2315 | 0.1948 |
| 0.3165 | 2.11 | 37600 | 0.2203 | 0.2006 |
| 0.3157 | 2.12 | 37700 | 0.2298 | 0.1970 |
| 0.3226 | 2.12 | 37800 | 0.2335 | 0.1963 |
| 0.3172 | 2.12 | 37900 | 0.2177 | 0.1963 |
| 0.2901 | 2.12 | 38000 | 0.2308 | 0.1970 |
| 0.3084 | 2.12 | 38100 | 0.2435 | 0.2020 |
| 0.2965 | 2.12 | 38200 | 0.2261 | 0.1970 |
| 0.2859 | 2.12 | 38300 | 0.2279 | 0.1963 |
| 0.3067 | 2.12 | 38400 | 0.2264 | 0.1905 |
| 0.3078 | 2.12 | 38500 | 0.2356 | 0.1999 |
| 0.3018 | 2.12 | 38600 | 0.2523 | 0.1884 |
| 0.3006 | 2.13 | 38700 | 0.2379 | 0.1991 |
| 0.2953 | 2.13 | 38800 | 0.2335 | 0.2035 |
| 0.3118 | 2.13 | 38900 | 0.2305 | 0.2085 |
| 0.2932 | 2.13 | 39000 | 0.2283 | 0.1984 |
| 0.2949 | 2.13 | 39100 | 0.2304 | 0.1941 |
| 0.2936 | 2.13 | 39200 | 0.2343 | 0.1970 |
| 0.3143 | 3.0 | 39300 | 0.2083 | 0.1955 |
| 0.3219 | 3.0 | 39400 | 0.2092 | 0.1963 |
| 0.3121 | 3.0 | 39500 | 0.2110 | 0.1934 |
| 0.3077 | 3.0 | 39600 | 0.2065 | 0.2027 |
| 0.2991 | 3.0 | 39700 | 0.2082 | 0.2070 |
| 0.2991 | 3.01 | 39800 | 0.2071 | 0.2013 |
| 0.3002 | 3.01 | 39900 | 0.2076 | 0.1999 |
| 0.2958 | 3.01 | 40000 | 0.2112 | 0.1955 |
| 0.2903 | 3.01 | 40100 | 0.2092 | 0.1948 |
| 0.2836 | 3.01 | 40200 | 0.2115 | 0.1948 |
| 0.2909 | 3.01 | 40300 | 0.2089 | 0.1948 |
| 0.2819 | 3.01 | 40400 | 0.2111 | 0.1919 |
| 0.2443 | 3.01 | 40500 | 0.2712 | 0.1941 |
| 0.2375 | 3.01 | 40600 | 0.2530 | 0.1919 |
| 0.2368 | 3.01 | 40700 | 0.2631 | 0.1955 |
| 0.225 | 3.02 | 40800 | 0.2684 | 0.1884 |
| 0.2296 | 3.02 | 40900 | 0.2657 | 0.1955 |
| 0.2193 | 3.02 | 41000 | 0.2657 | 0.1898 |
| 0.2118 | 3.02 | 41100 | 0.2737 | 0.1891 |
| 0.2155 | 3.02 | 41200 | 0.2821 | 0.1948 |
| 0.2298 | 3.02 | 41300 | 0.2765 | 0.1891 |
| 0.2067 | 3.02 | 41400 | 0.2724 | 0.1898 |
| 0.2065 | 3.02 | 41500 | 0.2820 | 0.1848 |
| 0.218 | 3.02 | 41600 | 0.2782 | 0.1891 |
| 0.212 | 3.02 | 41700 | 0.2724 | 0.1941 |
| 0.2109 | 3.03 | 41800 | 0.2715 | 0.1891 |
| 0.2094 | 3.03 | 41900 | 0.2687 | 0.1876 |
| 0.2256 | 3.03 | 42000 | 0.2843 | 0.1919 |
| 0.2156 | 3.03 | 42100 | 0.2742 | 0.1905 |
| 0.2397 | 3.03 | 42200 | 0.2744 | 0.1941 |
| 0.2097 | 3.03 | 42300 | 0.2690 | 0.1869 |
| 0.228 | 3.03 | 42400 | 0.2614 | 0.2042 |
| 0.2105 | 3.03 | 42500 | 0.2782 | 0.1833 |
| 0.2088 | 3.03 | 42600 | 0.2973 | 0.1912 |
| 0.2165 | 3.03 | 42700 | 0.2891 | 0.1898 |
| 0.2108 | 3.04 | 42800 | 0.2601 | 0.1905 |
| 0.2059 | 3.04 | 42900 | 0.2823 | 0.1919 |
| 0.218 | 3.04 | 43000 | 0.2801 | 0.1898 |
| 0.2198 | 3.04 | 43100 | 0.2717 | 0.1848 |
| 0.2244 | 3.04 | 43200 | 0.2548 | 0.1955 |
| 0.2158 | 3.04 | 43300 | 0.2697 | 0.1963 |
| 0.2093 | 3.04 | 43400 | 0.2917 | 0.1970 |
| 0.2283 | 3.04 | 43500 | 0.2666 | 0.1912 |
| 0.2071 | 3.04 | 43600 | 0.2588 | 0.1891 |
| 0.2122 | 3.04 | 43700 | 0.2674 | 0.1876 |
| 0.2181 | 3.05 | 43800 | 0.2882 | 0.1941 |
| 0.218 | 3.05 | 43900 | 0.2624 | 0.1898 |
| 0.2075 | 3.05 | 44000 | 0.2743 | 0.1819 |
| 0.2208 | 3.05 | 44100 | 0.2809 | 0.1912 |
| 0.2221 | 3.05 | 44200 | 0.2728 | 0.1919 |
| 0.222 | 3.05 | 44300 | 0.2790 | 0.1855 |
| 0.2254 | 3.05 | 44400 | 0.2683 | 0.1884 |
| 0.2153 | 3.05 | 44500 | 0.2624 | 0.2013 |
| 0.2038 | 3.05 | 44600 | 0.2732 | 0.1905 |
| 0.1955 | 3.05 | 44700 | 0.2570 | 0.1840 |
| 0.2203 | 3.06 | 44800 | 0.2851 | 0.1812 |
| 0.2032 | 3.06 | 44900 | 0.2646 | 0.1833 |
| 0.2201 | 3.06 | 45000 | 0.2763 | 0.1848 |
| 0.2129 | 3.06 | 45100 | 0.2844 | 0.1891 |
| 0.2276 | 3.06 | 45200 | 0.2646 | 0.1840 |
| 0.205 | 3.06 | 45300 | 0.2802 | 0.1862 |
| 0.2164 | 3.06 | 45400 | 0.2687 | 0.1797 |
| 0.2226 | 3.06 | 45500 | 0.2732 | 0.1804 |
| 0.2061 | 3.06 | 45600 | 0.2829 | 0.1855 |
| 0.2184 | 3.06 | 45700 | 0.2676 | 0.1919 |
| 0.2151 | 3.07 | 45800 | 0.2881 | 0.1855 |
| 0.2118 | 3.07 | 45900 | 0.2780 | 0.1855 |
| 0.2007 | 3.07 | 46000 | 0.2674 | 0.1855 |
| 0.206 | 3.07 | 46100 | 0.2828 | 0.1884 |
| 0.2171 | 3.07 | 46200 | 0.2843 | 0.1783 |
| 0.2136 | 3.07 | 46300 | 0.2782 | 0.1855 |
| 0.2123 | 3.07 | 46400 | 0.2730 | 0.1876 |
| 0.2197 | 3.07 | 46500 | 0.2881 | 0.1819 |
| 0.1985 | 3.07 | 46600 | 0.2831 | 0.1848 |
| 0.2174 | 3.07 | 46700 | 0.2676 | 0.1769 |
| 0.2144 | 3.08 | 46800 | 0.2916 | 0.1840 |
| 0.2974 | 3.08 | 46900 | 0.2193 | 0.1869 |
| 0.3292 | 3.08 | 47000 | 0.2193 | 0.1898 |
| 0.3086 | 3.08 | 47100 | 0.2194 | 0.1840 |
| 0.3122 | 3.08 | 47200 | 0.2285 | 0.1941 |
| 0.3243 | 3.08 | 47300 | 0.2159 | 0.1912 |
| 0.3107 | 3.08 | 47400 | 0.2226 | 0.1862 |
| 0.3441 | 3.08 | 47500 | 0.2195 | 0.1833 |
| 0.3099 | 3.08 | 47600 | 0.2210 | 0.1927 |
| 0.2827 | 3.08 | 47700 | 0.2297 | 0.1891 |
| 0.3002 | 3.09 | 47800 | 0.2242 | 0.1898 |
| 0.3076 | 3.09 | 47900 | 0.2242 | 0.1855 |
| 0.3199 | 3.09 | 48000 | 0.2179 | 0.1898 |
| 0.3239 | 3.09 | 48100 | 0.2228 | 0.1840 |
| 0.3069 | 3.09 | 48200 | 0.2191 | 0.1855 |
| 0.3061 | 3.09 | 48300 | 0.2075 | 0.1898 |
| 0.3129 | 3.09 | 48400 | 0.2223 | 0.1891 |
| 0.3134 | 3.09 | 48500 | 0.2247 | 0.1912 |
| 0.3198 | 3.09 | 48600 | 0.2137 | 0.1876 |
| 0.3209 | 3.09 | 48700 | 0.2263 | 0.1898 |
| 0.3193 | 3.1 | 48800 | 0.2256 | 0.1869 |
| 0.3247 | 3.1 | 48900 | 0.2220 | 0.1898 |
| 0.3112 | 3.1 | 49000 | 0.2166 | 0.1891 |
| 0.2954 | 3.1 | 49100 | 0.2234 | 0.1862 |
| 0.2933 | 3.1 | 49200 | 0.2172 | 0.1848 |
| 0.3214 | 3.1 | 49300 | 0.2198 | 0.1941 |
| 0.3241 | 3.1 | 49400 | 0.2142 | 0.1869 |
| 0.3025 | 3.1 | 49500 | 0.2218 | 0.1912 |
| 0.3069 | 3.1 | 49600 | 0.2306 | 0.1819 |
| 0.3089 | 3.1 | 49700 | 0.2203 | 0.1862 |
| 0.299 | 3.11 | 49800 | 0.2155 | 0.1884 |
| 0.3079 | 3.11 | 49900 | 0.2225 | 0.1862 |
| 0.3123 | 3.11 | 50000 | 0.2225 | 0.1891 |
| 0.2964 | 3.11 | 50100 | 0.2199 | 0.1869 |
| 0.3143 | 3.11 | 50200 | 0.2181 | 0.1991 |
| 0.3266 | 3.11 | 50300 | 0.2178 | 0.1912 |
| 0.3114 | 3.11 | 50400 | 0.2132 | 0.1862 |
| 0.2994 | 3.11 | 50500 | 0.2152 | 0.1927 |
| 0.2932 | 3.11 | 50600 | 0.2186 | 0.1891 |
| 0.3215 | 3.11 | 50700 | 0.2150 | 0.1819 |
| 0.3103 | 3.12 | 50800 | 0.2153 | 0.1905 |
| 0.3129 | 3.12 | 50900 | 0.2223 | 0.1905 |
| 0.3167 | 3.12 | 51000 | 0.2185 | 0.1884 |
| 0.2932 | 3.12 | 51100 | 0.2316 | 0.1876 |
| 0.2968 | 3.12 | 51200 | 0.2314 | 0.1919 |
| 0.2884 | 3.12 | 51300 | 0.2220 | 0.1783 |
| 0.2943 | 3.12 | 51400 | 0.2239 | 0.1912 |
| 0.2994 | 3.12 | 51500 | 0.2139 | 0.1833 |
| 0.3172 | 3.12 | 51600 | 0.2319 | 0.1919 |
| 0.2828 | 3.12 | 51700 | 0.2315 | 0.1991 |
| 0.3104 | 3.13 | 51800 | 0.2253 | 0.1948 |
| 0.285 | 3.13 | 51900 | 0.2143 | 0.1963 |
| 0.2916 | 3.13 | 52000 | 0.2237 | 0.1970 |
| 0.2787 | 3.13 | 52100 | 0.2177 | 0.1984 |
| 0.2909 | 3.13 | 52200 | 0.2290 | 0.1912 |
| 0.2967 | 4.0 | 52300 | 0.2107 | 0.1955 |
| 0.3057 | 4.0 | 52400 | 0.2052 | 0.1869 |
| 0.3248 | 4.0 | 52500 | 0.1982 | 0.1812 |
| 0.3072 | 4.0 | 52600 | 0.1969 | 0.1754 |
| 0.2967 | 4.0 | 52700 | 0.1981 | 0.1804 |
| 0.2984 | 4.01 | 52800 | 0.2031 | 0.1826 |
| 0.2878 | 4.01 | 52900 | 0.2015 | 0.1927 |
| 0.308 | 4.01 | 53000 | 0.2009 | 0.1919 |
| 0.2843 | 4.01 | 53100 | 0.2020 | 0.1912 |
| 0.2678 | 4.01 | 53200 | 0.2017 | 0.1819 |
| 0.2779 | 4.01 | 53300 | 0.2041 | 0.1812 |
| 0.2886 | 4.01 | 53400 | 0.1994 | 0.1905 |
| 0.2419 | 4.01 | 53500 | 0.2083 | 0.1833 |
| 0.2317 | 4.01 | 53600 | 0.2683 | 0.1848 |
| 0.223 | 4.01 | 53700 | 0.2444 | 0.1862 |
| 0.2385 | 4.02 | 53800 | 0.2605 | 0.1819 |
| 0.2208 | 4.02 | 53900 | 0.2630 | 0.1912 |
| 0.2116 | 4.02 | 54000 | 0.2589 | 0.1833 |
| 0.2188 | 4.02 | 54100 | 0.2489 | 0.1797 |
| 0.2001 | 4.02 | 54200 | 0.2675 | 0.1812 |
| 0.212 | 4.02 | 54300 | 0.2607 | 0.1761 |
| 0.2163 | 4.02 | 54400 | 0.2636 | 0.1783 |
| 0.1991 | 4.02 | 54500 | 0.2659 | 0.1804 |
| 0.2156 | 4.02 | 54600 | 0.2517 | 0.1769 |
| 0.1987 | 4.02 | 54700 | 0.2736 | 0.1833 |
| 0.2122 | 4.03 | 54800 | 0.2412 | 0.1754 |
| 0.2072 | 4.03 | 54900 | 0.2512 | 0.1869 |
| 0.2043 | 4.03 | 55000 | 0.2564 | 0.1819 |
| 0.2139 | 4.03 | 55100 | 0.2756 | 0.1840 |
| 0.2211 | 4.03 | 55200 | 0.2683 | 0.1826 |
| 0.2114 | 4.03 | 55300 | 0.2725 | 0.1769 |
| 0.2002 | 4.03 | 55400 | 0.2584 | 0.1797 |
| 0.2106 | 4.03 | 55500 | 0.2793 | 0.1790 |
| 0.2075 | 4.03 | 55600 | 0.2626 | 0.1826 |
| 0.2057 | 4.03 | 55700 | 0.2635 | 0.1783 |
| 0.2126 | 4.04 | 55800 | 0.2661 | 0.1776 |
| 0.2072 | 4.04 | 55900 | 0.2584 | 0.1776 |
| 0.2039 | 4.04 | 56000 | 0.2740 | 0.1826 |
| 0.2138 | 4.04 | 56100 | 0.2700 | 0.1797 |
| 0.2082 | 4.04 | 56200 | 0.2527 | 0.1876 |
| 0.213 | 4.04 | 56300 | 0.2631 | 0.1833 |
| 0.191 | 4.04 | 56400 | 0.2673 | 0.1812 |
| 0.2026 | 4.04 | 56500 | 0.2681 | 0.1855 |
| 0.221 | 4.04 | 56600 | 0.2660 | 0.1797 |
| 0.2026 | 4.04 | 56700 | 0.2719 | 0.1819 |
| 0.1954 | 4.05 | 56800 | 0.2785 | 0.1747 |
| 0.2111 | 4.05 | 56900 | 0.2755 | 0.1812 |
| 0.2077 | 4.05 | 57000 | 0.2726 | 0.1848 |
| 0.2025 | 4.05 | 57100 | 0.2690 | 0.1884 |
| 0.2167 | 4.05 | 57200 | 0.2719 | 0.1869 |
| 0.2062 | 4.05 | 57300 | 0.2660 | 0.1819 |
| 0.2245 | 4.05 | 57400 | 0.2756 | 0.1776 |
| 0.2185 | 4.05 | 57500 | 0.2668 | 0.1776 |
| 0.1968 | 4.05 | 57600 | 0.2810 | 0.1776 |
| 0.2016 | 4.05 | 57700 | 0.2894 | 0.1776 |
| 0.1921 | 4.06 | 57800 | 0.2772 | 0.1797 |
| 0.2078 | 4.06 | 57900 | 0.2874 | 0.1862 |
| 0.209 | 4.06 | 58000 | 0.2643 | 0.1769 |
| 0.2095 | 4.06 | 58100 | 0.2635 | 0.1819 |
| 0.2098 | 4.06 | 58200 | 0.2710 | 0.1797 |
| 0.2088 | 4.06 | 58300 | 0.2700 | 0.1747 |
| 0.202 | 4.06 | 58400 | 0.2748 | 0.1783 |
| 0.2113 | 4.06 | 58500 | 0.2794 | 0.1819 |
| 0.2108 | 4.06 | 58600 | 0.2658 | 0.1804 |
| 0.2001 | 4.06 | 58700 | 0.2764 | 0.1797 |
| 0.2171 | 4.07 | 58800 | 0.2689 | 0.1797 |
| 0.2024 | 4.07 | 58900 | 0.2509 | 0.1761 |
| 0.1994 | 4.07 | 59000 | 0.2769 | 0.1797 |
| 0.1923 | 4.07 | 59100 | 0.2518 | 0.1776 |
| 0.1998 | 4.07 | 59200 | 0.2672 | 0.1769 |
| 0.2075 | 4.07 | 59300 | 0.2704 | 0.1840 |
| 0.2056 | 4.07 | 59400 | 0.2723 | 0.1826 |
| 0.2107 | 4.07 | 59500 | 0.2671 | 0.1776 |
| 0.213 | 4.07 | 59600 | 0.2850 | 0.1797 |
| 0.205 | 4.07 | 59700 | 0.2790 | 0.1790 |
| 0.2042 | 4.08 | 59800 | 0.2841 | 0.1826 |
| 0.2096 | 4.08 | 59900 | 0.2776 | 0.1783 |
| 0.3228 | 4.08 | 60000 | 0.2220 | 0.1812 |
| 0.3277 | 4.08 | 60100 | 0.2229 | 0.1869 |
| 0.311 | 4.08 | 60200 | 0.2323 | 0.1862 |
| 0.2944 | 4.08 | 60300 | 0.2147 | 0.1754 |
| 0.32 | 4.08 | 60400 | 0.2103 | 0.1783 |
| 0.2769 | 4.08 | 60500 | 0.2209 | 0.1797 |
| 0.3392 | 4.08 | 60600 | 0.2145 | 0.1783 |
| 0.3189 | 4.08 | 60700 | 0.2079 | 0.1840 |
| 0.2825 | 4.09 | 60800 | 0.2262 | 0.1869 |
| 0.3007 | 4.09 | 60900 | 0.2121 | 0.1855 |
| 0.2973 | 4.09 | 61000 | 0.2151 | 0.1819 |
| 0.3367 | 4.09 | 61100 | 0.2121 | 0.1869 |
| 0.3168 | 4.09 | 61200 | 0.2191 | 0.1747 |
| 0.2964 | 4.09 | 61300 | 0.2148 | 0.1804 |
| 0.2936 | 4.09 | 61400 | 0.2111 | 0.1783 |
| 0.3022 | 4.09 | 61500 | 0.2175 | 0.1812 |
| 0.2972 | 4.09 | 61600 | 0.2218 | 0.1833 |
| 0.3069 | 4.09 | 61700 | 0.2135 | 0.1826 |
| 0.3027 | 4.1 | 61800 | 0.2226 | 0.1812 |
| 0.2917 | 4.1 | 61900 | 0.2166 | 0.1812 |
| 0.311 | 4.1 | 62000 | 0.2164 | 0.1761 |
| 0.3 | 4.1 | 62100 | 0.2227 | 0.1797 |
| 0.2809 | 4.1 | 62200 | 0.2151 | 0.1747 |
| 0.3062 | 4.1 | 62300 | 0.2139 | 0.1704 |
| 0.3063 | 4.1 | 62400 | 0.2184 | 0.1797 |
| 0.3006 | 4.1 | 62500 | 0.2087 | 0.1776 |
| 0.2898 | 4.1 | 62600 | 0.2180 | 0.1790 |
| 0.2937 | 4.1 | 62700 | 0.2124 | 0.1804 |
| 0.2906 | 4.11 | 62800 | 0.2219 | 0.1804 |
| 0.2842 | 4.11 | 62900 | 0.2163 | 0.1761 |
| 0.2911 | 4.11 | 63000 | 0.2210 | 0.1754 |
| 0.2983 | 4.11 | 63100 | 0.2236 | 0.1804 |
| 0.2948 | 4.11 | 63200 | 0.2132 | 0.1797 |
| 0.3152 | 4.11 | 63300 | 0.2132 | 0.1733 |
| 0.3081 | 4.11 | 63400 | 0.2119 | 0.1754 |
| 0.3145 | 4.11 | 63500 | 0.2123 | 0.1876 |
| 0.2867 | 4.11 | 63600 | 0.2149 | 0.1826 |
| 0.2827 | 4.11 | 63700 | 0.2097 | 0.1718 |
| 0.3117 | 4.12 | 63800 | 0.2143 | 0.1769 |
| 0.2909 | 4.12 | 63900 | 0.2184 | 0.1776 |
| 0.2971 | 4.12 | 64000 | 0.2187 | 0.1754 |
| 0.2895 | 4.12 | 64100 | 0.2139 | 0.1704 |
| 0.2885 | 4.12 | 64200 | 0.2291 | 0.1761 |
| 0.2848 | 4.12 | 64300 | 0.2132 | 0.1826 |
| 0.2951 | 4.12 | 64400 | 0.2136 | 0.1869 |
| 0.2839 | 4.12 | 64500 | 0.2149 | 0.1804 |
| 0.2983 | 4.12 | 64600 | 0.2146 | 0.1826 |
| 0.3029 | 4.12 | 64700 | 0.2327 | 0.1797 |
| 0.2775 | 4.13 | 64800 | 0.2222 | 0.1797 |
| 0.2813 | 4.13 | 64900 | 0.2234 | 0.1819 |
| 0.2822 | 4.13 | 65000 | 0.2126 | 0.1891 |
| 0.2757 | 4.13 | 65100 | 0.2183 | 0.1919 |
| 0.2792 | 4.13 | 65200 | 0.2147 | 0.1855 |
| 0.2909 | 4.13 | 65300 | 0.2157 | 0.1869 |
| 0.2946 | 5.0 | 65400 | 0.1955 | 0.1826 |
| 0.3022 | 5.0 | 65500 | 0.1938 | 0.1848 |
| 0.3086 | 5.0 | 65600 | 0.1910 | 0.1790 |
| 0.2887 | 5.0 | 65700 | 0.1915 | 0.1776 |
| 0.2941 | 5.0 | 65800 | 0.1924 | 0.1747 |
| 0.2906 | 5.01 | 65900 | 0.1933 | 0.1833 |
| 0.2876 | 5.01 | 66000 | 0.1967 | 0.1725 |
| 0.2992 | 5.01 | 66100 | 0.1926 | 0.1740 |
| 0.2769 | 5.01 | 66200 | 0.1940 | 0.1797 |
| 0.2703 | 5.01 | 66300 | 0.1980 | 0.1711 |
| 0.2777 | 5.01 | 66400 | 0.1996 | 0.1704 |
| 0.2908 | 5.01 | 66500 | 0.1954 | 0.1754 |
| 0.2467 | 5.01 | 66600 | 0.1982 | 0.1869 |
| 0.2192 | 5.01 | 66700 | 0.2626 | 0.1776 |
| 0.2227 | 5.01 | 66800 | 0.2472 | 0.1725 |
| 0.2192 | 5.02 | 66900 | 0.2449 | 0.1776 |
| 0.206 | 5.02 | 67000 | 0.2669 | 0.1769 |
| 0.1988 | 5.02 | 67100 | 0.2567 | 0.1747 |
| 0.2125 | 5.02 | 67200 | 0.2577 | 0.1790 |
| 0.1962 | 5.02 | 67300 | 0.2639 | 0.1747 |
| 0.2101 | 5.02 | 67400 | 0.2570 | 0.1697 |
| 0.208 | 5.02 | 67500 | 0.2584 | 0.1776 |
| 0.1963 | 5.02 | 67600 | 0.2519 | 0.1740 |
| 0.2067 | 5.02 | 67700 | 0.2607 | 0.1711 |
| 0.1925 | 5.02 | 67800 | 0.2645 | 0.1733 |
| 0.2107 | 5.03 | 67900 | 0.2379 | 0.1797 |
| 0.1995 | 5.03 | 68000 | 0.2425 | 0.1790 |
| 0.2023 | 5.03 | 68100 | 0.2626 | 0.1769 |
| 0.2037 | 5.03 | 68200 | 0.2751 | 0.1754 |
| 0.2226 | 5.03 | 68300 | 0.2499 | 0.1747 |
| 0.2103 | 5.03 | 68400 | 0.2634 | 0.1718 |
| 0.2037 | 5.03 | 68500 | 0.2595 | 0.1804 |
| 0.2104 | 5.03 | 68600 | 0.2699 | 0.1697 |
| 0.1998 | 5.03 | 68700 | 0.2596 | 0.1819 |
| 0.2026 | 5.03 | 68800 | 0.2644 | 0.1740 |
| 0.2032 | 5.04 | 68900 | 0.2718 | 0.1718 |
| 0.1919 | 5.04 | 69000 | 0.2606 | 0.1797 |
| 0.2049 | 5.04 | 69100 | 0.2719 | 0.1733 |
| 0.2086 | 5.04 | 69200 | 0.2700 | 0.1769 |
| 0.2118 | 5.04 | 69300 | 0.2556 | 0.1747 |
| 0.2078 | 5.04 | 69400 | 0.2529 | 0.1733 |
| 0.1882 | 5.04 | 69500 | 0.2753 | 0.1804 |
| 0.2077 | 5.04 | 69600 | 0.2801 | 0.1769 |
| 0.2073 | 5.04 | 69700 | 0.2695 | 0.1769 |
| 0.1983 | 5.04 | 69800 | 0.2611 | 0.1747 |
| 0.2117 | 5.05 | 69900 | 0.2581 | 0.1718 |
| 0.1982 | 5.05 | 70000 | 0.2714 | 0.1697 |
| 0.201 | 5.05 | 70100 | 0.2596 | 0.1689 |
| 0.2084 | 5.05 | 70200 | 0.2617 | 0.1653 |
| 0.2003 | 5.05 | 70300 | 0.2681 | 0.1711 |
| 0.2173 | 5.05 | 70400 | 0.2590 | 0.1733 |
| 0.2118 | 5.05 | 70500 | 0.2595 | 0.1689 |
| 0.197 | 5.05 | 70600 | 0.2549 | 0.1754 |
| 0.1956 | 5.05 | 70700 | 0.2685 | 0.1718 |
| 0.1923 | 5.05 | 70800 | 0.2755 | 0.1769 |
| 0.1949 | 5.06 | 70900 | 0.2722 | 0.1776 |
| 0.2007 | 5.06 | 71000 | 0.2611 | 0.1769 |
| 0.2154 | 5.06 | 71100 | 0.2604 | 0.1740 |
| 0.1999 | 5.06 | 71200 | 0.2556 | 0.1747 |
| 0.2167 | 5.06 | 71300 | 0.2622 | 0.1797 |
| 0.1968 | 5.06 | 71400 | 0.2670 | 0.1718 |
| 0.2009 | 5.06 | 71500 | 0.2727 | 0.1747 |
| 0.2017 | 5.06 | 71600 | 0.2769 | 0.1826 |
| 0.2105 | 5.06 | 71700 | 0.2628 | 0.1776 |
| 0.2071 | 5.06 | 71800 | 0.2552 | 0.1848 |
| 0.1984 | 5.07 | 71900 | 0.2592 | 0.1704 |
| 0.1967 | 5.07 | 72000 | 0.2612 | 0.1733 |
| 0.19 | 5.07 | 72100 | 0.2701 | 0.1783 |
| 0.2034 | 5.07 | 72200 | 0.2723 | 0.1740 |
| 0.1946 | 5.07 | 72300 | 0.2743 | 0.1740 |
| 0.2078 | 5.07 | 72400 | 0.2653 | 0.1682 |
| 0.2034 | 5.07 | 72500 | 0.2751 | 0.1761 |
| 0.2018 | 5.07 | 72600 | 0.2692 | 0.1740 |
| 0.1916 | 5.07 | 72700 | 0.2768 | 0.1797 |
| 0.2042 | 5.07 | 72800 | 0.2704 | 0.1754 |
| 0.2037 | 5.08 | 72900 | 0.2735 | 0.1711 |
| 0.2286 | 5.08 | 73000 | 0.2196 | 0.1797 |
| 0.3236 | 5.08 | 73100 | 0.2112 | 0.1855 |
| 0.2937 | 5.08 | 73200 | 0.2094 | 0.1819 |
| 0.2927 | 5.08 | 73300 | 0.2214 | 0.1884 |
| 0.2958 | 5.08 | 73400 | 0.2187 | 0.1812 |
| 0.303 | 5.08 | 73500 | 0.2153 | 0.1776 |
| 0.3022 | 5.08 | 73600 | 0.2164 | 0.1812 |
| 0.3054 | 5.08 | 73700 | 0.2028 | 0.1790 |
| 0.294 | 5.08 | 73800 | 0.2164 | 0.1697 |
| 0.2916 | 5.09 | 73900 | 0.2229 | 0.1747 |
| 0.2981 | 5.09 | 74000 | 0.2102 | 0.1776 |
| 0.2925 | 5.09 | 74100 | 0.2197 | 0.1790 |
| 0.3208 | 5.09 | 74200 | 0.2216 | 0.1783 |
| 0.2969 | 5.09 | 74300 | 0.2122 | 0.1790 |
| 0.2895 | 5.09 | 74400 | 0.2166 | 0.1804 |
| 0.2759 | 5.09 | 74500 | 0.2171 | 0.1769 |
| 0.2912 | 5.09 | 74600 | 0.2169 | 0.1689 |
| 0.2918 | 5.09 | 74700 | 0.2167 | 0.1840 |
| 0.3058 | 5.09 | 74800 | 0.2184 | 0.1740 |
| 0.2914 | 5.1 | 74900 | 0.2070 | 0.1747 |
| 0.2984 | 5.1 | 75000 | 0.2182 | 0.1740 |
| 0.278 | 5.1 | 75100 | 0.2200 | 0.1740 |
| 0.2825 | 5.1 | 75200 | 0.2099 | 0.1761 |
| 0.2946 | 5.1 | 75300 | 0.2126 | 0.1733 |
| 0.2885 | 5.1 | 75400 | 0.2150 | 0.1725 |
| 0.2994 | 5.1 | 75500 | 0.2055 | 0.1790 |
| 0.2783 | 5.1 | 75600 | 0.2179 | 0.1747 |
| 0.2889 | 5.1 | 75700 | 0.2121 | 0.1761 |
| 0.2945 | 5.1 | 75800 | 0.2129 | 0.1804 |
| 0.2737 | 5.11 | 75900 | 0.2107 | 0.1718 |
| 0.286 | 5.11 | 76000 | 0.2124 | 0.1754 |
| 0.288 | 5.11 | 76100 | 0.2105 | 0.1740 |
| 0.2714 | 5.11 | 76200 | 0.2196 | 0.1740 |
| 0.3 | 5.11 | 76300 | 0.2190 | 0.1769 |
| 0.3108 | 5.11 | 76400 | 0.2118 | 0.1783 |
| 0.3053 | 5.11 | 76500 | 0.2148 | 0.1754 |
| 0.3021 | 5.11 | 76600 | 0.2137 | 0.1776 |
| 0.2831 | 5.11 | 76700 | 0.2090 | 0.1769 |
| 0.2705 | 5.11 | 76800 | 0.2126 | 0.1733 |
| 0.3132 | 5.12 | 76900 | 0.2083 | 0.1740 |
| 0.2816 | 5.12 | 77000 | 0.2159 | 0.1769 |
| 0.2901 | 5.12 | 77100 | 0.2175 | 0.1769 |
| 0.2767 | 5.12 | 77200 | 0.2199 | 0.1704 |
| 0.2875 | 5.12 | 77300 | 0.2172 | 0.1790 |
| 0.279 | 5.12 | 77400 | 0.2186 | 0.1769 |
| 0.2784 | 5.12 | 77500 | 0.2276 | 0.1761 |
| 0.2965 | 5.12 | 77600 | 0.2161 | 0.1783 |
| 0.2895 | 5.12 | 77700 | 0.2276 | 0.1783 |
| 0.2753 | 5.12 | 77800 | 0.2280 | 0.1718 |
| 0.2775 | 5.13 | 77900 | 0.2241 | 0.1761 |
| 0.2644 | 5.13 | 78000 | 0.2263 | 0.1790 |
| 0.2909 | 5.13 | 78100 | 0.2221 | 0.1812 |
| 0.2622 | 5.13 | 78200 | 0.2178 | 0.1797 |
| 0.275 | 5.13 | 78300 | 0.2135 | 0.1783 |
| 0.2706 | 5.13 | 78400 | 0.2115 | 0.1783 |
| 0.2967 | 6.0 | 78500 | 0.1939 | 0.1761 |
| 0.3006 | 6.0 | 78600 | 0.1912 | 0.1761 |
| 0.2895 | 6.0 | 78700 | 0.1899 | 0.1718 |
| 0.2918 | 6.0 | 78800 | 0.1874 | 0.1804 |
| 0.2946 | 6.0 | 78900 | 0.1908 | 0.1776 |
| 0.2774 | 6.01 | 79000 | 0.1907 | 0.1761 |
| 0.2835 | 6.01 | 79100 | 0.1890 | 0.1718 |
| 0.2867 | 6.01 | 79200 | 0.1898 | 0.1747 |
| 0.2778 | 6.01 | 79300 | 0.1911 | 0.1769 |
| 0.2654 | 6.01 | 79400 | 0.1906 | 0.1761 |
| 0.2769 | 6.01 | 79500 | 0.1902 | 0.1761 |
| 0.2697 | 6.01 | 79600 | 0.1908 | 0.1747 |
| 0.237 | 6.01 | 79700 | 0.2295 | 0.1740 |
| 0.2045 | 6.01 | 79800 | 0.2397 | 0.1769 |
| 0.2071 | 6.01 | 79900 | 0.2405 | 0.1697 |
| 0.2105 | 6.02 | 80000 | 0.2430 | 0.1754 |
| 0.1955 | 6.02 | 80100 | 0.2478 | 0.1769 |
| 0.196 | 6.02 | 80200 | 0.2424 | 0.1776 |
| 0.2045 | 6.02 | 80300 | 0.2508 | 0.1697 |
| 0.1948 | 6.02 | 80400 | 0.2571 | 0.1711 |
| 0.2096 | 6.02 | 80500 | 0.2477 | 0.1689 |
| 0.1928 | 6.02 | 80600 | 0.2503 | 0.1675 |
| 0.1888 | 6.02 | 80700 | 0.2540 | 0.1682 |
| 0.2006 | 6.02 | 80800 | 0.2587 | 0.1697 |
| 0.2008 | 6.02 | 80900 | 0.2546 | 0.1704 |
| 0.2018 | 6.03 | 81000 | 0.2413 | 0.1711 |
| 0.1937 | 6.03 | 81100 | 0.2407 | 0.1689 |
| 0.2106 | 6.03 | 81200 | 0.2513 | 0.1632 |
| 0.1949 | 6.03 | 81300 | 0.2563 | 0.1668 |
| 0.2207 | 6.03 | 81400 | 0.2649 | 0.1646 |
| 0.1913 | 6.03 | 81500 | 0.2543 | 0.1682 |
| 0.1991 | 6.03 | 81600 | 0.2575 | 0.1740 |
| 0.1992 | 6.03 | 81700 | 0.2597 | 0.1639 |
| 0.1917 | 6.03 | 81800 | 0.2571 | 0.1725 |
| 0.191 | 6.03 | 81900 | 0.2595 | 0.1718 |
| 0.1992 | 6.04 | 82000 | 0.2494 | 0.1697 |
| 0.1839 | 6.04 | 82100 | 0.2594 | 0.1697 |
| 0.1943 | 6.04 | 82200 | 0.2655 | 0.1733 |
| 0.2039 | 6.04 | 82300 | 0.2690 | 0.1725 |
| 0.2011 | 6.04 | 82400 | 0.2555 | 0.1711 |
| 0.1964 | 6.04 | 82500 | 0.2590 | 0.1718 |
| 0.1858 | 6.04 | 82600 | 0.2659 | 0.1704 |
| 0.2113 | 6.04 | 82700 | 0.2534 | 0.1697 |
| 0.1883 | 6.04 | 82800 | 0.2519 | 0.1711 |
| 0.2005 | 6.04 | 82900 | 0.2581 | 0.1711 |
| 0.2013 | 6.05 | 83000 | 0.2619 | 0.1711 |
| 0.1994 | 6.05 | 83100 | 0.2566 | 0.1661 |
| 0.1949 | 6.05 | 83200 | 0.2635 | 0.1711 |
| 0.2002 | 6.05 | 83300 | 0.2551 | 0.1689 |
| 0.1992 | 6.05 | 83400 | 0.2622 | 0.1747 |
| 0.2039 | 6.05 | 83500 | 0.2567 | 0.1761 |
| 0.2118 | 6.05 | 83600 | 0.2541 | 0.1711 |
| 0.1999 | 6.05 | 83700 | 0.2601 | 0.1769 |
| 0.1819 | 6.05 | 83800 | 0.2556 | 0.1697 |
| 0.1859 | 6.05 | 83900 | 0.2523 | 0.1704 |
| 0.1929 | 6.06 | 84000 | 0.2633 | 0.1747 |
| 0.1854 | 6.06 | 84100 | 0.2554 | 0.1733 |
| 0.2043 | 6.06 | 84200 | 0.2536 | 0.1747 |
| 0.2 | 6.06 | 84300 | 0.2499 | 0.1718 |
| 0.1986 | 6.06 | 84400 | 0.2446 | 0.1661 |
| 0.1899 | 6.06 | 84500 | 0.2540 | 0.1689 |
| 0.1881 | 6.06 | 84600 | 0.2614 | 0.1725 |
| 0.2018 | 6.06 | 84700 | 0.2581 | 0.1725 |
| 0.1952 | 6.06 | 84800 | 0.2632 | 0.1689 |
| 0.2048 | 6.06 | 84900 | 0.2575 | 0.1689 |
| 0.1951 | 6.07 | 85000 | 0.2557 | 0.1639 |
| 0.1912 | 6.07 | 85100 | 0.2527 | 0.1711 |
| 0.1871 | 6.07 | 85200 | 0.2535 | 0.1689 |
| 0.1907 | 6.07 | 85300 | 0.2565 | 0.1682 |
| 0.1899 | 6.07 | 85400 | 0.2565 | 0.1646 |
| 0.1939 | 6.07 | 85500 | 0.2434 | 0.1718 |
| 0.1936 | 6.07 | 85600 | 0.2602 | 0.1682 |
| 0.2073 | 6.07 | 85700 | 0.2537 | 0.1682 |
| 0.1944 | 6.07 | 85800 | 0.2580 | 0.1682 |
| 0.1908 | 6.07 | 85900 | 0.2621 | 0.1725 |
| 0.1985 | 6.08 | 86000 | 0.2652 | 0.1632 |
| 0.2576 | 6.08 | 86100 | 0.1991 | 0.1725 |
| 0.2994 | 6.08 | 86200 | 0.2014 | 0.1675 |
| 0.29 | 6.08 | 86300 | 0.2028 | 0.1675 |
| 0.2783 | 6.08 | 86400 | 0.2102 | 0.1689 |
| 0.3024 | 6.08 | 86500 | 0.2031 | 0.1704 |
| 0.2955 | 6.08 | 86600 | 0.2074 | 0.1661 |
| 0.3126 | 6.08 | 86700 | 0.2015 | 0.1733 |
| 0.2897 | 6.08 | 86800 | 0.2007 | 0.1689 |
| 0.2925 | 6.08 | 86900 | 0.2058 | 0.1661 |
| 0.2948 | 6.09 | 87000 | 0.2099 | 0.1697 |
| 0.2827 | 6.09 | 87100 | 0.2031 | 0.1682 |
| 0.3111 | 6.09 | 87200 | 0.2109 | 0.1725 |
| 0.2924 | 6.09 | 87300 | 0.2021 | 0.1733 |
| 0.2875 | 6.09 | 87400 | 0.2083 | 0.1718 |
| 0.2672 | 6.09 | 87500 | 0.2114 | 0.1646 |
| 0.279 | 6.09 | 87600 | 0.2024 | 0.1718 |
| 0.2979 | 6.09 | 87700 | 0.2097 | 0.1704 |
| 0.2697 | 6.09 | 87800 | 0.2103 | 0.1682 |
| 0.3038 | 6.09 | 87900 | 0.2075 | 0.1646 |
| 0.2784 | 6.1 | 88000 | 0.2082 | 0.1682 |
| 0.2839 | 6.1 | 88100 | 0.2103 | 0.1661 |
| 0.2868 | 6.1 | 88200 | 0.2059 | 0.1668 |
| 0.2753 | 6.1 | 88300 | 0.2048 | 0.1682 |
| 0.2866 | 6.1 | 88400 | 0.2018 | 0.1661 |
| 0.3049 | 6.1 | 88500 | 0.2017 | 0.1668 |
| 0.2969 | 6.1 | 88600 | 0.2037 | 0.1639 |
| 0.2828 | 6.1 | 88700 | 0.2024 | 0.1675 |
| 0.2888 | 6.1 | 88800 | 0.2062 | 0.1661 |
| 0.2857 | 6.1 | 88900 | 0.2070 | 0.1661 |
| 0.2774 | 6.11 | 89000 | 0.2028 | 0.1610 |
| 0.2759 | 6.11 | 89100 | 0.2079 | 0.1646 |
| 0.2809 | 6.11 | 89200 | 0.2041 | 0.1668 |
| 0.2755 | 6.11 | 89300 | 0.2085 | 0.1697 |
| 0.2752 | 6.11 | 89400 | 0.2063 | 0.1682 |
| 0.3058 | 6.11 | 89500 | 0.2040 | 0.1689 |
| 0.2948 | 6.11 | 89600 | 0.2032 | 0.1625 |
| 0.2973 | 6.11 | 89700 | 0.2087 | 0.1646 |
| 0.2646 | 6.11 | 89800 | 0.2074 | 0.1639 |
| 0.2907 | 6.11 | 89900 | 0.2007 | 0.1610 |
| 0.2919 | 6.12 | 90000 | 0.2056 | 0.1582 |
| 0.2914 | 6.12 | 90100 | 0.2050 | 0.1582 |
| 0.2869 | 6.12 | 90200 | 0.2040 | 0.1603 |
| 0.2707 | 6.12 | 90300 | 0.2010 | 0.1632 |
| 0.276 | 6.12 | 90400 | 0.2072 | 0.1668 |
| 0.2919 | 6.12 | 90500 | 0.2057 | 0.1711 |
| 0.2623 | 6.12 | 90600 | 0.1982 | 0.1625 |
| 0.2908 | 6.12 | 90700 | 0.2046 | 0.1697 |
| 0.2812 | 6.12 | 90800 | 0.2144 | 0.1625 |
| 0.2753 | 6.12 | 90900 | 0.2189 | 0.1653 |
| 0.2762 | 6.13 | 91000 | 0.2137 | 0.1646 |
| 0.2786 | 6.13 | 91100 | 0.2124 | 0.1661 |
| 0.2651 | 6.13 | 91200 | 0.2019 | 0.1646 |
| 0.2688 | 6.13 | 91300 | 0.2038 | 0.1740 |
| 0.2731 | 6.13 | 91400 | 0.1973 | 0.1718 |
| 0.2711 | 6.13 | 91500 | 0.2022 | 0.1725 |
| 0.2865 | 7.0 | 91600 | 0.1825 | 0.1718 |
| 0.3023 | 7.0 | 91700 | 0.1820 | 0.1675 |
| 0.2996 | 7.0 | 91800 | 0.1816 | 0.1675 |
| 0.2899 | 7.0 | 91900 | 0.1808 | 0.1711 |
| 0.2811 | 7.0 | 92000 | 0.1807 | 0.1668 |
| 0.276 | 7.01 | 92100 | 0.1809 | 0.1675 |
| 0.2987 | 7.01 | 92200 | 0.1802 | 0.1661 |
| 0.2814 | 7.01 | 92300 | 0.1806 | 0.1632 |
| 0.2729 | 7.01 | 92400 | 0.1802 | 0.1618 |
| 0.2757 | 7.01 | 92500 | 0.1804 | 0.1618 |
| 0.2843 | 7.01 | 92600 | 0.1804 | 0.1625 |
| 0.253 | 7.01 | 92700 | 0.1803 | 0.1589 |
| 0.2198 | 7.01 | 92800 | 0.2069 | 0.1632 |
| 0.2024 | 7.01 | 92900 | 0.2118 | 0.1618 |
| 0.2156 | 7.01 | 93000 | 0.2216 | 0.1639 |
| 0.1975 | 7.02 | 93100 | 0.2230 | 0.1639 |
| 0.1961 | 7.02 | 93200 | 0.2292 | 0.1661 |
| 0.1901 | 7.02 | 93300 | 0.2307 | 0.1653 |
| 0.1883 | 7.02 | 93400 | 0.2354 | 0.1646 |
| 0.1884 | 7.02 | 93500 | 0.2354 | 0.1653 |
| 0.2034 | 7.02 | 93600 | 0.2402 | 0.1653 |
| 0.1819 | 7.02 | 93700 | 0.2362 | 0.1625 |
| 0.1946 | 7.02 | 93800 | 0.2431 | 0.1639 |
| 0.1965 | 7.02 | 93900 | 0.2410 | 0.1661 |
| 0.2 | 7.02 | 94000 | 0.2404 | 0.1668 |
| 0.1872 | 7.03 | 94100 | 0.2334 | 0.1661 |
| 0.1857 | 7.03 | 94200 | 0.2357 | 0.1646 |
| 0.2034 | 7.03 | 94300 | 0.2396 | 0.1625 |
| 0.2067 | 7.03 | 94400 | 0.2407 | 0.1625 |
| 0.2067 | 7.03 | 94500 | 0.2393 | 0.1632 |
| 0.1911 | 7.03 | 94600 | 0.2402 | 0.1653 |
| 0.2007 | 7.03 | 94700 | 0.2399 | 0.1675 |
| 0.1903 | 7.03 | 94800 | 0.2442 | 0.1610 |
| 0.1902 | 7.03 | 94900 | 0.2436 | 0.1603 |
| 0.1896 | 7.03 | 95000 | 0.2479 | 0.1625 |
| 0.2001 | 7.04 | 95100 | 0.2437 | 0.1632 |
| 0.1845 | 7.04 | 95200 | 0.2444 | 0.1661 |
| 0.1997 | 7.04 | 95300 | 0.2486 | 0.1610 |
| 0.1912 | 7.04 | 95400 | 0.2467 | 0.1639 |
| 0.1994 | 7.04 | 95500 | 0.2412 | 0.1618 |
| 0.1902 | 7.04 | 95600 | 0.2485 | 0.1618 |
| 0.1855 | 7.04 | 95700 | 0.2466 | 0.1610 |
| 0.213 | 7.04 | 95800 | 0.2463 | 0.1653 |
| 0.1812 | 7.04 | 95900 | 0.2481 | 0.1603 |
| 0.1902 | 7.04 | 96000 | 0.2487 | 0.1589 |
| 0.2014 | 7.05 | 96100 | 0.2490 | 0.1653 |
| 0.1899 | 7.05 | 96200 | 0.2491 | 0.1646 |
| 0.1812 | 7.05 | 96300 | 0.2524 | 0.1639 |
| 0.1986 | 7.05 | 96400 | 0.2497 | 0.1632 |
| 0.1995 | 7.05 | 96500 | 0.2501 | 0.1639 |
| 0.2047 | 7.05 | 96600 | 0.2469 | 0.1625 |
| 0.1993 | 7.05 | 96700 | 0.2471 | 0.1610 |
| 0.1833 | 7.05 | 96800 | 0.2460 | 0.1618 |
| 0.1892 | 7.05 | 96900 | 0.2474 | 0.1625 |
| 0.1766 | 7.05 | 97000 | 0.2457 | 0.1625 |
| 0.2002 | 7.06 | 97100 | 0.2484 | 0.1596 |
| 0.189 | 7.06 | 97200 | 0.2457 | 0.1603 |
| 0.1958 | 7.06 | 97300 | 0.2450 | 0.1610 |
| 0.1962 | 7.06 | 97400 | 0.2424 | 0.1618 |
| 0.201 | 7.06 | 97500 | 0.2400 | 0.1632 |
| 0.1915 | 7.06 | 97600 | 0.2421 | 0.1639 |
| 0.1887 | 7.06 | 97700 | 0.2417 | 0.1639 |
| 0.2027 | 7.06 | 97800 | 0.2422 | 0.1646 |
| 0.192 | 7.06 | 97900 | 0.2447 | 0.1618 |
| 0.199 | 7.06 | 98000 | 0.2439 | 0.1618 |
| 0.1905 | 7.07 | 98100 | 0.2428 | 0.1618 |
| 0.1914 | 7.07 | 98200 | 0.2424 | 0.1618 |
| 0.1819 | 7.07 | 98300 | 0.2426 | 0.1610 |
| 0.1927 | 7.07 | 98400 | 0.2441 | 0.1603 |
| 0.194 | 7.07 | 98500 | 0.2454 | 0.1610 |
| 0.2013 | 7.07 | 98600 | 0.2429 | 0.1603 |
| 0.1904 | 7.07 | 98700 | 0.2442 | 0.1603 |
| 0.1915 | 7.07 | 98800 | 0.2443 | 0.1603 |
| 0.1809 | 7.07 | 98900 | 0.2438 | 0.1610 |
| 0.1977 | 7.07 | 99000 | 0.2447 | 0.1596 |
| 0.1893 | 7.08 | 99100 | 0.2462 | 0.1596 |
| 0.3181 | 7.08 | 99200 | 0.1927 | 0.1639 |
| 0.3076 | 7.08 | 99300 | 0.1867 | 0.1661 |
| 0.2971 | 7.08 | 99400 | 0.1874 | 0.1639 |
| 0.2919 | 7.08 | 99500 | 0.1879 | 0.1668 |
| 0.3001 | 7.08 | 99600 | 0.1885 | 0.1646 |
| 0.2837 | 7.08 | 99700 | 0.1887 | 0.1639 |
| 0.3188 | 7.08 | 99800 | 0.1885 | 0.1632 |
| 0.3076 | 7.08 | 99900 | 0.1884 | 0.1625 |
| 0.272 | 7.08 | 100000 | 0.1884 | 0.1618 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
madhaviit/llama-2-7b-madhav-t1-v2 | madhaviit | "2023-12-13T19:50:54Z" | 0 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | "2023-12-13T19:44:21Z" | Entry not found |
ducha07/ASR-test | ducha07 | "2024-01-11T12:53:44Z" | 0 | 0 | transformers | ["transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "vi", "dataset:ducha07/audio_HTV_thoisu", "base_model:facebook/mms-1b-all", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | "2023-12-13T19:53:13Z" |
---
language:
- vi
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- ducha07/audio_HTV_thoisu
metrics:
- wer
model-index:
- name: ASR-test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: HTV news
type: ducha07/audio_HTV_thoisu
metrics:
- name: Wer
type: wer
value: 0.2796665364074508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR-test-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the HTV news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6593
- Wer: 0.2797
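The Wer figure above is the word error rate: the word-level Levenshtein distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of that computation (assuming whitespace tokenization; the function name `wer` is illustrative, not part of this repository):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[-1] / len(ref)

print(wer("the cat sat", "the cat sat"))  # perfect match: 0.0
print(wer("the cat sat", "the bat sat"))  # 1 substitution / 3 words ≈ 0.33
```

Libraries such as `jiwer` (commonly used by `evaluate`'s `wer` metric) implement the same edit-distance definition with extra text normalization.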
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
- mixed_precision_training: Native AMP
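The `linear` scheduler with 100 warmup steps ramps the learning rate from 0 up to 0.001 over the first 100 optimizer steps, then decays it linearly toward 0 at the end of training. A minimal sketch of that schedule (`total_steps` here is an illustrative placeholder; the real value follows from dataset size, batch size, and the 100 epochs):

```python
def linear_lr(step: int, base_lr: float = 1e-3,
              warmup_steps: int = 100, total_steps: int = 8500) -> float:
    """Linear warmup to base_lr, then linear decay to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # warmup ramp
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))  # decay

print(linear_lr(50))    # mid-warmup: half of base_lr
print(linear_lr(100))   # peak learning rate
print(linear_lr(8500))  # end of schedule: 0.0
```

This mirrors what `transformers.get_linear_schedule_with_warmup` computes as a multiplier on the optimizer's base learning rate.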
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.8562 | 0.92 | 100 | 0.8316 | 0.4500 |
| 1.0777 | 1.83 | 200 | 0.6898 | 0.3899 |
| 0.98 | 2.75 | 300 | 0.6811 | 0.3740 |
| 0.8967 | 3.67 | 400 | 0.6332 | 0.3565 |
| 0.8965 | 4.59 | 500 | 0.6038 | 0.3517 |
| 0.8396 | 5.5 | 600 | 0.6040 | 0.3479 |
| 0.8137 | 6.42 | 700 | 0.5929 | 0.3408 |
| 0.8304 | 7.34 | 800 | 0.5911 | 0.3513 |
| 0.7894 | 8.26 | 900 | 0.6078 | 0.3357 |
| 0.7412 | 9.17 | 1000 | 0.6214 | 0.3230 |
| 0.7653 | 10.09 | 1100 | 0.5869 | 0.3444 |
| 0.7437 | 11.01 | 1200 | 0.5906 | 0.3213 |
| 0.7083 | 11.93 | 1300 | 0.5952 | 0.3139 |
| 0.7168 | 12.84 | 1400 | 0.5721 | 0.3267 |
| 0.7008 | 13.76 | 1500 | 0.5895 | 0.3177 |
| 0.6825 | 14.68 | 1600 | 0.5909 | 0.3098 |
| 0.6989 | 15.6 | 1700 | 0.5979 | 0.3673 |
| 0.6717 | 16.51 | 1800 | 0.5863 | 0.3077 |
| 0.6496 | 17.43 | 1900 | 0.5798 | 0.3043 |
| 0.6609 | 18.35 | 2000 | 0.5787 | 0.3555 |
| 0.628 | 19.27 | 2100 | 0.5889 | 0.3133 |
| 0.6322 | 20.18 | 2200 | 0.5913 | 0.3077 |
| 0.634 | 21.1 | 2300 | 0.5769 | 0.3193 |
| 0.6172 | 22.02 | 2400 | 0.5731 | 0.3005 |
| 0.6043 | 22.94 | 2500 | 0.5820 | 0.3075 |
| 0.6051 | 23.85 | 2600 | 0.5831 | 0.3435 |
| 0.5865 | 24.77 | 2700 | 0.5790 | 0.3029 |
| 0.5806 | 25.69 | 2800 | 0.5945 | 0.3053 |
| 0.5901 | 26.61 | 2900 | 0.5780 | 0.3126 |
| 0.5769 | 27.52 | 3000 | 0.5732 | 0.2963 |
| 0.5539 | 28.44 | 3100 | 0.5837 | 0.2950 |
| 0.5799 | 29.36 | 3200 | 0.5835 | 0.3178 |
| 0.5518 | 30.28 | 3300 | 0.5941 | 0.2943 |
| 0.549 | 31.19 | 3400 | 0.5960 | 0.2979 |
| 0.5612 | 32.11 | 3500 | 0.5747 | 0.3167 |
| 0.5411 | 33.03 | 3600 | 0.5855 | 0.2978 |
| 0.536 | 33.94 | 3700 | 0.5720 | 0.2944 |
| 0.5329 | 34.86 | 3800 | 0.5998 | 0.3186 |
| 0.5185 | 35.78 | 3900 | 0.5936 | 0.2884 |
| 0.5186 | 36.7 | 4000 | 0.5773 | 0.2901 |
| 0.5027 | 37.61 | 4100 | 0.5969 | 0.3264 |
| 0.52 | 38.53 | 4200 | 0.6184 | 0.2939 |
| 0.4992 | 39.45 | 4300 | 0.5887 | 0.2943 |
| 0.5064 | 40.37 | 4400 | 0.5814 | 0.2966 |
| 0.4928 | 41.28 | 4500 | 0.6128 | 0.2902 |
| 0.508 | 42.2 | 4600 | 0.5943 | 0.2923 |
| 0.4887 | 43.12 | 4700 | 0.6100 | 0.3039 |
| 0.4872 | 44.04 | 4800 | 0.6044 | 0.2875 |
| 0.4711 | 44.95 | 4900 | 0.5961 | 0.2974 |
| 0.4813 | 45.87 | 5000 | 0.6022 | 0.2945 |
| 0.4818 | 46.79 | 5100 | 0.6199 | 0.2898 |
| 0.4492 | 47.71 | 5200 | 0.6161 | 0.2943 |
| 0.4715 | 48.62 | 5300 | 0.6038 | 0.2838 |
| 0.4601 | 49.54 | 5400 | 0.6223 | 0.2829 |
| 0.4432 | 50.46 | 5500 | 0.6058 | 0.2965 |
| 0.4419 | 51.38 | 5600 | 0.6134 | 0.2917 |
| 0.4564 | 52.29 | 5700 | 0.6124 | 0.2857 |
| 0.4349 | 53.21 | 5800 | 0.6229 | 0.2877 |
| 0.4358 | 54.13 | 5900 | 0.6095 | 0.2898 |
| 0.4432 | 55.05 | 6000 | 0.6365 | 0.2881 |
| 0.4277 | 55.96 | 6100 | 0.6169 | 0.2870 |
| 0.4397 | 56.88 | 6200 | 0.6174 | 0.2849 |
| 0.4245 | 57.8 | 6300 | 0.6340 | 0.2858 |
| 0.4203 | 58.72 | 6400 | 0.6321 | 0.2909 |
| 0.4112 | 59.63 | 6500 | 0.6243 | 0.2866 |
| 0.4244 | 60.55 | 6600 | 0.6318 | 0.2775 |
| 0.4119 | 61.47 | 6700 | 0.6215 | 0.2798 |
| 0.403 | 62.39 | 6800 | 0.6213 | 0.2829 |
| 0.4158 | 63.3 | 6900 | 0.6451 | 0.2795 |
| 0.3997 | 64.22 | 7000 | 0.6317 | 0.2854 |
| 0.4006 | 65.14 | 7100 | 0.6329 | 0.2846 |
| 0.4051 | 66.06 | 7200 | 0.6318 | 0.2834 |
| 0.3953 | 66.97 | 7300 | 0.6442 | 0.2855 |
| 0.4119 | 67.89 | 7400 | 0.6345 | 0.2893 |
| 0.3976 | 68.81 | 7500 | 0.6361 | 0.2798 |
| 0.3965 | 69.72 | 7600 | 0.6355 | 0.2853 |
| 0.3957 | 70.64 | 7700 | 0.6457 | 0.2814 |
| 0.3837 | 71.56 | 7800 | 0.6396 | 0.2855 |
| 0.3893 | 72.48 | 7900 | 0.6424 | 0.2842 |
| 0.3816 | 73.39 | 8000 | 0.6496 | 0.2778 |
| 0.3855 | 74.31 | 8100 | 0.6427 | 0.2881 |
| 0.3767 | 75.23 | 8200 | 0.6394 | 0.2858 |
| 0.3747 | 76.15 | 8300 | 0.6513 | 0.2844 |
| 0.3829 | 77.06 | 8400 | 0.6602 | 0.2775 |
| 0.3721 | 77.98 | 8500 | 0.6427 | 0.2825 |
| 0.3708 | 78.9 | 8600 | 0.6507 | 0.2847 |
| 0.3767 | 79.82 | 8700 | 0.6518 | 0.2816 |
| 0.3655 | 80.73 | 8800 | 0.6597 | 0.2802 |
| 0.3614 | 81.65 | 8900 | 0.6542 | 0.2781 |
| 0.3629 | 82.57 | 9000 | 0.6520 | 0.2782 |
| 0.3621 | 83.49 | 9100 | 0.6501 | 0.2797 |
| 0.3616 | 84.4 | 9200 | 0.6528 | 0.2777 |
| 0.3519 | 85.32 | 9300 | 0.6549 | 0.2798 |
| 0.3572 | 86.24 | 9400 | 0.6541 | 0.2789 |
| 0.3585 | 87.16 | 9500 | 0.6497 | 0.2778 |
| 0.3531 | 88.07 | 9600 | 0.6523 | 0.2781 |
| 0.3586 | 88.99 | 9700 | 0.6578 | 0.2789 |
| 0.3463 | 89.91 | 9800 | 0.6565 | 0.2816 |
| 0.3508 | 90.83 | 9900 | 0.6559 | 0.2797 |
| 0.3513 | 91.74 | 10000 | 0.6611 | 0.2794 |
| 0.3425 | 92.66 | 10100 | 0.6538 | 0.2804 |
| 0.3596 | 93.58 | 10200 | 0.6639 | 0.2808 |
| 0.3632 | 94.5 | 10300 | 0.6561 | 0.2789 |
| 0.348 | 95.41 | 10400 | 0.6556 | 0.2786 |
| 0.3514 | 96.33 | 10500 | 0.6575 | 0.2791 |
| 0.3499 | 97.25 | 10600 | 0.6573 | 0.2795 |
| 0.3353 | 98.17 | 10700 | 0.6589 | 0.2797 |
| 0.3468 | 99.08 | 10800 | 0.6589 | 0.2799 |
| 0.3571 | 100.0 | 10900 | 0.6593 | 0.2797 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
hkivancoral/smids_3x_beit_base_sgd_0001_fold5 | hkivancoral | "2023-12-13T20:41:31Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T19:53:48Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7833333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_0001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5470
- Accuracy: 0.7833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
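The step counts in the results table follow directly from these settings: one epoch is 225 optimizer steps, so 50 epochs give 11250 steps, and with batch size 32 that would correspond to roughly 225 × 32 = 7200 training images (assuming no gradient accumulation and no dropped partial batch — neither is stated in the card). A quick check:

```python
steps_per_epoch = 225    # one epoch spans 225 steps in the table below
num_epochs = 50
train_batch_size = 32

total_steps = steps_per_epoch * num_epochs
# Rough estimate only: assumes no gradient accumulation / drop_last
approx_train_examples = steps_per_epoch * train_batch_size

print(total_steps)            # 11250, matching the final row of the table
print(approx_train_examples)  # 7200
```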
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.227 | 1.0 | 225 | 1.2590 | 0.3417 |
| 1.1831 | 2.0 | 450 | 1.2003 | 0.3717 |
| 1.084 | 3.0 | 675 | 1.1460 | 0.4017 |
| 1.0157 | 4.0 | 900 | 1.0941 | 0.42 |
| 0.9735 | 5.0 | 1125 | 1.0438 | 0.455 |
| 0.9436 | 6.0 | 1350 | 0.9975 | 0.4917 |
| 0.9179 | 7.0 | 1575 | 0.9541 | 0.5333 |
| 0.8784 | 8.0 | 1800 | 0.9154 | 0.5717 |
| 0.884 | 9.0 | 2025 | 0.8784 | 0.595 |
| 0.8367 | 10.0 | 2250 | 0.8446 | 0.6217 |
| 0.8299 | 11.0 | 2475 | 0.8151 | 0.6483 |
| 0.7845 | 12.0 | 2700 | 0.7879 | 0.6617 |
| 0.7765 | 13.0 | 2925 | 0.7642 | 0.6667 |
| 0.7469 | 14.0 | 3150 | 0.7428 | 0.6783 |
| 0.7257 | 15.0 | 3375 | 0.7247 | 0.69 |
| 0.6997 | 16.0 | 3600 | 0.7073 | 0.7 |
| 0.7213 | 17.0 | 3825 | 0.6920 | 0.7083 |
| 0.698 | 18.0 | 4050 | 0.6780 | 0.7183 |
| 0.7064 | 19.0 | 4275 | 0.6649 | 0.7217 |
| 0.6988 | 20.0 | 4500 | 0.6533 | 0.735 |
| 0.6396 | 21.0 | 4725 | 0.6426 | 0.7383 |
| 0.6558 | 22.0 | 4950 | 0.6328 | 0.7483 |
| 0.6628 | 23.0 | 5175 | 0.6239 | 0.75 |
| 0.6417 | 24.0 | 5400 | 0.6165 | 0.7533 |
| 0.6414 | 25.0 | 5625 | 0.6079 | 0.7517 |
| 0.6773 | 26.0 | 5850 | 0.6018 | 0.7567 |
| 0.662 | 27.0 | 6075 | 0.5968 | 0.7583 |
| 0.6119 | 28.0 | 6300 | 0.5913 | 0.765 |
| 0.6058 | 29.0 | 6525 | 0.5864 | 0.765 |
| 0.5469 | 30.0 | 6750 | 0.5816 | 0.7683 |
| 0.6085 | 31.0 | 6975 | 0.5777 | 0.7667 |
| 0.557 | 32.0 | 7200 | 0.5744 | 0.7667 |
| 0.5975 | 33.0 | 7425 | 0.5708 | 0.7683 |
| 0.5747 | 34.0 | 7650 | 0.5675 | 0.7717 |
| 0.6075 | 35.0 | 7875 | 0.5645 | 0.7717 |
| 0.5661 | 36.0 | 8100 | 0.5618 | 0.7733 |
| 0.5862 | 37.0 | 8325 | 0.5597 | 0.7733 |
| 0.5867 | 38.0 | 8550 | 0.5581 | 0.775 |
| 0.5414 | 39.0 | 8775 | 0.5562 | 0.7767 |
| 0.5431 | 40.0 | 9000 | 0.5546 | 0.775 |
| 0.5693 | 41.0 | 9225 | 0.5532 | 0.7767 |
| 0.5499 | 42.0 | 9450 | 0.5518 | 0.7783 |
| 0.5959 | 43.0 | 9675 | 0.5505 | 0.78 |
| 0.6402 | 44.0 | 9900 | 0.5495 | 0.78 |
| 0.5702 | 45.0 | 10125 | 0.5486 | 0.7817 |
| 0.5765 | 46.0 | 10350 | 0.5481 | 0.7833 |
| 0.6208 | 47.0 | 10575 | 0.5477 | 0.7833 |
| 0.5613 | 48.0 | 10800 | 0.5473 | 0.7833 |
| 0.6326 | 49.0 | 11025 | 0.5471 | 0.7833 |
| 0.5777 | 50.0 | 11250 | 0.5470 | 0.7833 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
yutingg/essay_clarity | yutingg | "2023-12-13T19:56:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T19:56:24Z" | Entry not found |
hkivancoral/smids_3x_beit_base_rms_001_fold5 | hkivancoral | "2023-12-13T20:44:51Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T19:57:57Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8216666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1444
- Accuracy: 0.8217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9503 | 1.0 | 225 | 0.9116 | 0.5117 |
| 0.9706 | 2.0 | 450 | 0.9826 | 0.46 |
| 0.8 | 3.0 | 675 | 0.8216 | 0.55 |
| 0.7869 | 4.0 | 900 | 0.7274 | 0.6417 |
| 0.7386 | 5.0 | 1125 | 0.7210 | 0.65 |
| 0.6956 | 6.0 | 1350 | 0.8161 | 0.6183 |
| 0.8586 | 7.0 | 1575 | 0.7427 | 0.6283 |
| 0.6974 | 8.0 | 1800 | 0.7391 | 0.6467 |
| 0.6497 | 9.0 | 2025 | 0.6781 | 0.665 |
| 0.665 | 10.0 | 2250 | 0.6784 | 0.69 |
| 0.6749 | 11.0 | 2475 | 0.6355 | 0.7083 |
| 0.6727 | 12.0 | 2700 | 0.6116 | 0.7083 |
| 0.6759 | 13.0 | 2925 | 0.6229 | 0.715 |
| 0.6034 | 14.0 | 3150 | 0.6562 | 0.685 |
| 0.5372 | 15.0 | 3375 | 0.5788 | 0.755 |
| 0.539 | 16.0 | 3600 | 0.5524 | 0.7583 |
| 0.5144 | 17.0 | 3825 | 0.5824 | 0.7483 |
| 0.4796 | 18.0 | 4050 | 0.5455 | 0.7617 |
| 0.5096 | 19.0 | 4275 | 0.5692 | 0.765 |
| 0.4664 | 20.0 | 4500 | 0.5893 | 0.7533 |
| 0.3623 | 21.0 | 4725 | 0.5578 | 0.745 |
| 0.3075 | 22.0 | 4950 | 0.5688 | 0.7867 |
| 0.3806 | 23.0 | 5175 | 0.5983 | 0.7633 |
| 0.4403 | 24.0 | 5400 | 0.4856 | 0.8017 |
| 0.3263 | 25.0 | 5625 | 0.4951 | 0.8083 |
| 0.4298 | 26.0 | 5850 | 0.5186 | 0.8067 |
| 0.3696 | 27.0 | 6075 | 0.5017 | 0.8017 |
| 0.3505 | 28.0 | 6300 | 0.5055 | 0.805 |
| 0.2809 | 29.0 | 6525 | 0.5401 | 0.81 |
| 0.2639 | 30.0 | 6750 | 0.5378 | 0.8083 |
| 0.1827 | 31.0 | 6975 | 0.5714 | 0.815 |
| 0.2309 | 32.0 | 7200 | 0.5483 | 0.8167 |
| 0.2167 | 33.0 | 7425 | 0.5706 | 0.7967 |
| 0.1201 | 34.0 | 7650 | 0.6703 | 0.8117 |
| 0.1274 | 35.0 | 7875 | 0.7662 | 0.7917 |
| 0.1115 | 36.0 | 8100 | 0.6767 | 0.8183 |
| 0.1604 | 37.0 | 8325 | 0.8509 | 0.8083 |
| 0.0668 | 38.0 | 8550 | 0.7497 | 0.8233 |
| 0.1178 | 39.0 | 8775 | 0.8497 | 0.8067 |
| 0.0788 | 40.0 | 9000 | 0.9494 | 0.8033 |
| 0.0775 | 41.0 | 9225 | 0.9252 | 0.81 |
| 0.1033 | 42.0 | 9450 | 0.9696 | 0.8217 |
| 0.0903 | 43.0 | 9675 | 0.9856 | 0.8133 |
| 0.037 | 44.0 | 9900 | 1.0200 | 0.81 |
| 0.019 | 45.0 | 10125 | 1.1824 | 0.8067 |
| 0.0484 | 46.0 | 10350 | 1.0838 | 0.8183 |
| 0.0259 | 47.0 | 10575 | 1.1218 | 0.8083 |
| 0.0077 | 48.0 | 10800 | 1.1617 | 0.8133 |
| 0.0106 | 49.0 | 11025 | 1.1590 | 0.8117 |
| 0.0158 | 50.0 | 11250 | 1.1444 | 0.8217 |
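The reported 0.8217 accuracy is the final-epoch checkpoint, which is not the best point in the run: validation loss bottoms out around epoch 24 (0.4856) while accuracy peaks at epoch 38 (0.8233), with the loss climbing steadily afterwards. A small sketch of picking the best checkpoint from a few rows transcribed from the table above:

```python
# (epoch, validation_loss, accuracy) — selected rows from the table above
log = [
    (24, 0.4856, 0.8017),
    (38, 0.7497, 0.8233),
    (42, 0.9696, 0.8217),
    (50, 1.1444, 0.8217),
]

best_by_loss = min(log, key=lambda row: row[1])  # lowest validation loss
best_by_acc = max(log, key=lambda row: row[2])   # highest accuracy

print(best_by_loss[0])  # 24
print(best_by_acc[0])   # 38
```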
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
linqus/bert-finetuned-ner | linqus | "2024-02-02T22:32:21Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-13T19:59:28Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9328715157512782
- name: Recall
type: recall
value: 0.9518680578929654
- name: F1
type: f1
value: 0.9422740524781341
- name: Accuracy
type: accuracy
value: 0.9866515570730559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0670
- Precision: 0.9329
- Recall: 0.9519
- F1: 0.9423
- Accuracy: 0.9867
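As a quick sanity check, the reported F1 is — as expected for micro-averaged scores — the harmonic mean of the precision and recall above:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9329, 0.9519), 4))  # ≈ 0.9423, matching the reported F1
```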
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.025 | 1.0 | 1756 | 0.0677 | 0.9269 | 0.9472 | 0.9369 | 0.9848 |
| 0.0227 | 2.0 | 3512 | 0.0681 | 0.9302 | 0.9482 | 0.9391 | 0.9857 |
| 0.015 | 3.0 | 5268 | 0.0670 | 0.9329 | 0.9519 | 0.9423 | 0.9867 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
magnifi/llama-2-classifier-v3 | magnifi | "2023-12-13T20:05:10Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T20:01:10Z" | Entry not found |
ManthanKulakarni/phi_finetuned_v1 | ManthanKulakarni | "2023-12-13T20:02:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T20:02:16Z" | Entry not found |
magnifi/zephyr-classifier-v3-all | magnifi | "2023-12-13T20:06:41Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T20:02:39Z" | Entry not found |
SheepVipPro/Sheep | SheepVipPro | "2023-12-13T20:15:34Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2023-12-13T20:15:34Z" | ---
license: other
license_name: sheep
license_link: LICENSE
---
|