modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–679M) | likes (int64, 0–11k) | library_name (256 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---
pinot/wav2vec2-conformer-large-cv13 | pinot | "2023-12-15T15:38:55Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/wav2vec2-conformer-rel-pos-large",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T14:12:25Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-conformer-rel-pos-large
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-conformer-large-cv13
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: ja
split: test[:10%]
args: ja
metrics:
- name: Wer
type: wer
value: 0.961053330382828
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-conformer-large-cv13
This model is a fine-tuned version of [facebook/wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3295
- Wer: 0.9611
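The `Wer` value above is the word error rate; 0.9611 means nearly every reference word needs an edit. As a rough sketch of the metric (Trainer-generated cards like this one typically compute it with the `evaluate`/`jiwer` libraries, not a hand-rolled version), WER is the word-level edit distance normalized by the reference length:

```python
# Minimal WER sketch: Levenshtein distance over words, divided by the
# number of reference words. Illustrative only, not the Trainer's code.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat"))     # 0.0
print(wer("the cat sat", "the bat sat on"))  # 2 edits / 3 words ≈ 0.667
```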
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.4756 | 1.0 | 715 | 5.7619 | 1.0 |
| 5.6554 | 2.0 | 1430 | 5.7260 | 1.0 |
| 5.4654 | 3.0 | 2146 | 5.5588 | 0.9993 |
| 5.421 | 4.0 | 2861 | 5.5970 | 0.9918 |
| 5.3141 | 5.0 | 3577 | 5.4359 | 0.9794 |
| 5.2603 | 6.0 | 4292 | 5.4187 | 0.9792 |
| 5.1834 | 7.0 | 5008 | 5.3865 | 0.9785 |
| 5.1195 | 8.0 | 5723 | 5.3875 | 0.9661 |
| 5.0788 | 9.0 | 6438 | 5.3399 | 0.9668 |
| 4.9988 | 9.99 | 7150 | 5.3295 | 0.9611 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
abraxasu/GPT2_large_summarization_peft | abraxasu | "2023-12-13T14:12:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T14:12:56Z" | Entry not found |
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-01 | alinerodrigues | "2023-12-13T19:57:49Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T14:14:37Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-01
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1531
- Wer: 0.0951
- Cer: 0.0280
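The `Cer` above is the character error rate: the same edit-distance idea as WER, just computed over characters instead of words, which is why it is much lower than the WER for near-miss transcriptions. A minimal sketch of the definition (cards like this one normally use the `evaluate`/`jiwer` libraries):

```python
# Rolling-row Levenshtein distance; CER divides it by the reference length
# in characters. Illustrative only, not the Trainer's implementation.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

print(cer("gato", "gatos"))  # 1 insertion / 4 chars = 0.25
```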
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 27.9637 | 1.0 | 67 | 7.7844 | 0.9680 | 0.9846 |
| 10.0192 | 2.0 | 134 | 6.8622 | 0.9613 | 0.9448 |
| 4.9566 | 3.0 | 201 | 6.6997 | 0.9617 | 0.9567 |
| 4.9566 | 4.0 | 268 | 6.8259 | 0.9662 | 0.9720 |
| 4.4483 | 5.0 | 335 | 6.8626 | 0.9680 | 0.9780 |
| 4.2073 | 6.0 | 402 | 5.6960 | 0.9732 | 0.9821 |
| 4.2073 | 7.0 | 469 | 6.6086 | 0.9728 | 0.9818 |
| 4.077 | 8.0 | 536 | 6.2581 | 0.9714 | 0.9821 |
| 3.7594 | 9.0 | 603 | 6.0266 | 0.9718 | 0.9827 |
| 3.7594 | 10.0 | 670 | 5.7485 | 0.9756 | 0.9795 |
| 3.7788 | 11.0 | 737 | 5.0587 | 0.9714 | 0.9816 |
| 3.6071 | 12.0 | 804 | 4.7627 | 0.9714 | 0.9827 |
| 3.6071 | 13.0 | 871 | 3.5214 | 0.9725 | 0.9820 |
| 3.3363 | 14.0 | 938 | 2.9877 | 0.9753 | 0.9798 |
| 3.1039 | 15.0 | 1005 | 2.9151 | 0.9805 | 0.9840 |
| 3.1039 | 16.0 | 1072 | 2.8843 | 0.9847 | 0.9909 |
| 2.9064 | 17.0 | 1139 | 2.8720 | 0.9857 | 0.9846 |
| 2.8703 | 18.0 | 1206 | 2.7859 | 0.9972 | 0.9364 |
| 2.8703 | 19.0 | 1273 | 2.5571 | 1.0 | 0.8941 |
| 2.6946 | 20.0 | 1340 | 1.6485 | 0.9944 | 0.4890 |
| 1.8247 | 21.0 | 1407 | 0.7422 | 0.7133 | 0.1540 |
| 1.8247 | 22.0 | 1474 | 0.4440 | 0.3640 | 0.0807 |
| 0.8703 | 23.0 | 1541 | 0.3507 | 0.2090 | 0.0564 |
| 0.5944 | 24.0 | 1608 | 0.3083 | 0.1881 | 0.0500 |
| 0.5944 | 25.0 | 1675 | 0.2676 | 0.1721 | 0.0456 |
| 0.4905 | 26.0 | 1742 | 0.2580 | 0.1536 | 0.0428 |
| 0.4306 | 27.0 | 1809 | 0.2281 | 0.1428 | 0.0391 |
| 0.4306 | 28.0 | 1876 | 0.2203 | 0.1299 | 0.0364 |
| 0.356 | 29.0 | 1943 | 0.2066 | 0.1250 | 0.0352 |
| 0.3299 | 30.0 | 2010 | 0.2104 | 0.1177 | 0.0335 |
| 0.3299 | 31.0 | 2077 | 0.2006 | 0.1181 | 0.0330 |
| 0.3279 | 32.0 | 2144 | 0.1907 | 0.1181 | 0.0335 |
| 0.293 | 33.0 | 2211 | 0.1819 | 0.1135 | 0.0322 |
| 0.293 | 34.0 | 2278 | 0.1876 | 0.1097 | 0.0320 |
| 0.2869 | 35.0 | 2345 | 0.1862 | 0.1083 | 0.0311 |
| 0.2771 | 36.0 | 2412 | 0.1806 | 0.1160 | 0.0323 |
| 0.2771 | 37.0 | 2479 | 0.1757 | 0.1087 | 0.0311 |
| 0.2668 | 38.0 | 2546 | 0.1799 | 0.1052 | 0.0308 |
| 0.2394 | 39.0 | 2613 | 0.1840 | 0.1059 | 0.0313 |
| 0.2394 | 40.0 | 2680 | 0.1826 | 0.1031 | 0.0303 |
| 0.2269 | 41.0 | 2747 | 0.1759 | 0.1059 | 0.0312 |
| 0.2153 | 42.0 | 2814 | 0.1822 | 0.1024 | 0.0308 |
| 0.2153 | 43.0 | 2881 | 0.1667 | 0.1021 | 0.0305 |
| 0.238 | 44.0 | 2948 | 0.1757 | 0.1021 | 0.0308 |
| 0.2251 | 45.0 | 3015 | 0.1704 | 0.1031 | 0.0306 |
| 0.2251 | 46.0 | 3082 | 0.1809 | 0.1000 | 0.0299 |
| 0.2196 | 47.0 | 3149 | 0.1676 | 0.1000 | 0.0307 |
| 0.1932 | 48.0 | 3216 | 0.1658 | 0.1007 | 0.0300 |
| 0.1932 | 49.0 | 3283 | 0.1652 | 0.1000 | 0.0299 |
| 0.199 | 50.0 | 3350 | 0.1720 | 0.0993 | 0.0303 |
| 0.199 | 51.0 | 3417 | 0.1619 | 0.1021 | 0.0305 |
| 0.199 | 52.0 | 3484 | 0.1642 | 0.0986 | 0.0295 |
| 0.1747 | 53.0 | 3551 | 0.1655 | 0.1000 | 0.0299 |
| 0.1785 | 54.0 | 3618 | 0.1697 | 0.0965 | 0.0292 |
| 0.1785 | 55.0 | 3685 | 0.1605 | 0.0993 | 0.0294 |
| 0.1734 | 56.0 | 3752 | 0.1693 | 0.0979 | 0.0293 |
| 0.1813 | 57.0 | 3819 | 0.1658 | 0.0968 | 0.0292 |
| 0.1813 | 58.0 | 3886 | 0.1638 | 0.0996 | 0.0298 |
| 0.1718 | 59.0 | 3953 | 0.1705 | 0.0961 | 0.0290 |
| 0.1646 | 60.0 | 4020 | 0.1678 | 0.0965 | 0.0286 |
| 0.1646 | 61.0 | 4087 | 0.1647 | 0.0989 | 0.0288 |
| 0.1706 | 62.0 | 4154 | 0.1598 | 0.1010 | 0.0291 |
| 0.1559 | 63.0 | 4221 | 0.1555 | 0.0982 | 0.0288 |
| 0.1559 | 64.0 | 4288 | 0.1622 | 0.0965 | 0.0285 |
| 0.171 | 65.0 | 4355 | 0.1678 | 0.0940 | 0.0288 |
| 0.1655 | 66.0 | 4422 | 0.1643 | 0.0913 | 0.0281 |
| 0.1655 | 67.0 | 4489 | 0.1618 | 0.0947 | 0.0281 |
| 0.1628 | 68.0 | 4556 | 0.1587 | 0.0947 | 0.0283 |
| 0.149 | 69.0 | 4623 | 0.1614 | 0.0954 | 0.0281 |
| 0.149 | 70.0 | 4690 | 0.1636 | 0.0951 | 0.0280 |
| 0.1531 | 71.0 | 4757 | 0.1584 | 0.0989 | 0.0284 |
| 0.1677 | 72.0 | 4824 | 0.1638 | 0.0958 | 0.0284 |
| 0.1677 | 73.0 | 4891 | 0.1608 | 0.0947 | 0.0277 |
| 0.153 | 74.0 | 4958 | 0.1579 | 0.0951 | 0.0276 |
| 0.1464 | 75.0 | 5025 | 0.1633 | 0.0961 | 0.0280 |
| 0.1464 | 76.0 | 5092 | 0.1561 | 0.0958 | 0.0277 |
| 0.1533 | 77.0 | 5159 | 0.1554 | 0.0944 | 0.0279 |
| 0.1487 | 78.0 | 5226 | 0.1617 | 0.0930 | 0.0279 |
| 0.1487 | 79.0 | 5293 | 0.1574 | 0.0927 | 0.0274 |
| 0.1492 | 80.0 | 5360 | 0.1531 | 0.0951 | 0.0280 |
| 0.151 | 81.0 | 5427 | 0.1632 | 0.0954 | 0.0279 |
| 0.151 | 82.0 | 5494 | 0.1613 | 0.0972 | 0.0283 |
| 0.151 | 83.0 | 5561 | 0.1581 | 0.0968 | 0.0281 |
| 0.1398 | 84.0 | 5628 | 0.1569 | 0.0965 | 0.0278 |
| 0.1398 | 85.0 | 5695 | 0.1586 | 0.0968 | 0.0283 |
| 0.1344 | 86.0 | 5762 | 0.1595 | 0.0951 | 0.0280 |
| 0.1367 | 87.0 | 5829 | 0.1594 | 0.0937 | 0.0277 |
| 0.1367 | 88.0 | 5896 | 0.1583 | 0.0954 | 0.0282 |
| 0.1543 | 89.0 | 5963 | 0.1604 | 0.0968 | 0.0281 |
| 0.134 | 90.0 | 6030 | 0.1607 | 0.0923 | 0.0274 |
| 0.134 | 91.0 | 6097 | 0.1578 | 0.0944 | 0.0278 |
| 0.1498 | 92.0 | 6164 | 0.1595 | 0.0951 | 0.0280 |
| 0.133 | 93.0 | 6231 | 0.1557 | 0.0968 | 0.0282 |
| 0.133 | 94.0 | 6298 | 0.1598 | 0.0951 | 0.0277 |
| 0.1487 | 95.0 | 6365 | 0.1576 | 0.0940 | 0.0278 |
| 0.1343 | 96.0 | 6432 | 0.1558 | 0.0944 | 0.0280 |
| 0.1343 | 97.0 | 6499 | 0.1561 | 0.0951 | 0.0280 |
| 0.132 | 98.0 | 6566 | 0.1570 | 0.0933 | 0.0276 |
| 0.1416 | 99.0 | 6633 | 0.1574 | 0.0930 | 0.0274 |
| 0.13 | 100.0 | 6700 | 0.1574 | 0.0930 | 0.0275 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
Dimon-ton/food_classifier | Dimon-ton | "2023-12-13T14:47:21Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T14:16:29Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Dimon-ton/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dimon-ton/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3617
- Validation Loss: 0.3222
- Train Accuracy: 0.926
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
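The `PolynomialDecay` schedule configured above, with `power: 1.0` and `end_learning_rate: 0.0`, reduces to a plain linear decay from 3e-05 to 0 over 20,000 steps. A minimal sketch of what Keras computes:

```python
# Sketch of keras.optimizers.schedules.PolynomialDecay with the config above:
# initial_learning_rate=3e-05, decay_steps=20000, end_learning_rate=0.0,
# power=1.0, cycle=False (which clamps the step at decay_steps).
def polynomial_decay(step, initial_lr=3e-05, decay_steps=20_000,
                     end_lr=0.0, power=1.0):
    step = min(step, decay_steps)  # cycle=False
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))       # 3e-05 (initial)
print(polynomial_decay(10_000))  # 1.5e-05 (halfway)
print(polynomial_decay(25_000))  # 0.0 (clamped past decay_steps)
```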
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7251 | 1.6243 | 0.821 | 0 |
| 1.2000 | 0.8153 | 0.904 | 1 |
| 0.6925 | 0.5191 | 0.907 | 2 |
| 0.4992 | 0.3969 | 0.916 | 3 |
| 0.3617 | 0.3222 | 0.926 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tanooki426/MarkusVaughn | tanooki426 | "2023-12-13T14:20:50Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T14:17:42Z" | ---
license: openrail
---
|
bfish15/Fine_Tune_LLaMA_Identification | bfish15 | "2023-12-13T14:18:10Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T14:18:10Z" | ---
license: apache-2.0
---
|
Vital65/kijin | Vital65 | "2023-12-13T14:22:52Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T14:21:25Z" | ---
license: openrail
---
|
salmon54561/Mephy_the_skunkgirl | salmon54561 | "2023-12-14T11:22:25Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2023-12-13T14:28:05Z" | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579bdeb1b1c5bff9be9ed39/DbG_KKiUatclU2wTVE4HC.png)
This is a LoRA fine-tune of calm2-7b, an open-source language model released by CyberAgent.
It plays Mephy, a skunk girl who speaks in a tomboyish tone. Please note this is R-18 content. (It includes niche fetish elements such as flatulence and musk.) It is intended for use with text-generation-webui.
Place the whole folder inside text-generation-webui's loras folder.
I'm not very familiar with the WebUI, but I think it will work well if you configure it as shown in the images.
One caveat: in the Parameters tab, under Generation, set max_new_tokens to around 256 and enter "\n" in Custom stopping strings.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579bdeb1b1c5bff9be9ed39/TQNejUvrjPUZOt39PXpkO.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579bdeb1b1c5bff9be9ed39/F1IvE2XYfGZvNqZvTLm0c.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579bdeb1b1c5bff9be9ed39/zK91RdeD49FWjdRUAliwV.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579bdeb1b1c5bff9be9ed39/Yfqw7JGAUBLJszgDXo_TF.png)
|
isp-uv-es/DTACS | isp-uv-es | "2024-05-23T11:05:47Z" | 0 | 0 | null | [
"remote sensing",
"sentinel2",
"onboard",
"image-segmentation",
"license:cc-by-nc-4.0",
"region:us"
] | image-segmentation | "2023-12-13T14:28:28Z" | ---
license: cc-by-nc-4.0
pipeline_tag: image-segmentation
tags:
- remote sensing
- sentinel2
- onboard
---
# DTACS trained models
This repository contains the trained models of the publication:
> Mateo-García, G., Aybar, C., Acciarini, G., Růžička, V., Meoni, G., Longépé, N., and Gómez-Chova, L. (2023). Onboard Cloud Detection and Atmospheric Correction with Deep Learning Emulators. IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, 1875–1878. DOI: [10.1109/IGARSS52108.2023.10282605](https://doi.org/10.1109/IGARSS52108.2023.10282605)
<center>
<img src="./figure02.png" width="40%">
</center>
For surface reflectance (SR) estimation we include the models:
* **DTACS S2 bands** SR model with all the S2 bands. `DTACS_SR_sentinel2.pt`
* **DTACS phi-sat II bands** SR model with overlapping bands of Phi-Sat II and Sentinel-2. `DTACS_SR_phisat2.pt`
* **DTACS Proba-V bands** SR model with Blue, Red, NIR and SWIR bands. `DTACS_SR_probav.pt`
* **DTACS PlanetScope bands** SR model with Blue, Green, Red and NIR bands. `DTACS_SR_planetscope.pt`
For cloud detection (CD):
* **DTACS S2 bands**: CD model with all the S2 bands. `DTACS_CLOUD_ALL.pt`
* **DTACS RGBNIR bands**: CD model with Red, Green, Blue and NIR bands. `DTACS_CLOUD_RGBNIR.pt`
Examples of use here: https://github.com/spaceml-org/DTACSNet
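As a hypothetical illustration of the band subsets listed above (the actual band ordering and names each checkpoint expects are defined in the DTACSNet repository, so the mapping below is an assumption, including the SWIR choice for the Proba-V subset):

```python
# Hypothetical band-subset helper for a (13, H, W) Sentinel-2 L1C stack.
# The band order below is the standard S2 L1C order; check DTACSNet for
# the ordering the models actually expect.
S2_BANDS = ["B01", "B02", "B03", "B04", "B05", "B06", "B07",
            "B08", "B8A", "B09", "B10", "B11", "B12"]

SUBSETS = {
    "sentinel2":   S2_BANDS,                       # all 13 bands
    "rgbnir":      ["B04", "B03", "B02", "B08"],   # Red, Green, Blue, NIR
    "probav":      ["B02", "B04", "B08", "B11"],   # Blue, Red, NIR, SWIR (assumed)
    "planetscope": ["B02", "B03", "B04", "B08"],   # Blue, Green, Red, NIR
}

def band_indices(subset: str) -> list:
    """Indices into the 13-band stack for a given model's band subset."""
    return [S2_BANDS.index(b) for b in SUBSETS[subset]]

print(band_indices("rgbnir"))  # [3, 2, 1, 7]
```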
## Licence
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-nc.png" alt="licence" width="60"/>
All pre-trained models in this repository are released under a [Creative Commons non-commercial licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)
|
amphion/hifigan_speech_bigdata | amphion | "2023-12-21T10:20:29Z" | 0 | 3 | null | [
"arxiv:2311.14957",
"license:mit",
"region:us"
] | null | "2023-12-13T14:31:33Z" | ---
license: mit
---
# Amphion Vocoder Pretrained Models
We provide a [HiFi-GAN](https://github.com/open-mmlab/Amphion/tree/main/egs/vocoder/gan/tfr_enhanced_hifigan) pretrained checkpoint for speech, which is trained on 685 hours of speech data.
## Quick Start
To utilize these pretrained vocoders, just run the following commands:
### Step 1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/hifigan_speech_bigdata
```
### Step 2: Clone Amphion's Source Code from GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
### Step 3: Specify the checkpoint's path
Use the soft link to specify the downloaded checkpoint in the first step:
```bash
cd Amphion
mkdir -p ckpts/vocoder
ln -s "$(realpath ../hifigan_speech_bigdata/hifigan_speech)" ckpts/vocoder/hifigan_speech
```
### Step 4: Inference
For analysis-synthesis on the processed dataset, raw waveforms, or predicted mel spectrograms, you can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/blob/main/egs/vocoder/gan/tfr_enhanced_hifigan/README.md).
```bash
sh egs/vocoder/gan/tfr_enhanced_hifigan/run.sh --stage 3 \
--infer_mode [Your chosen inference mode] \
--infer_datasets [Datasets you want to inference, needed when infer_from_dataset] \
--infer_feature_dir [Your path to your predicted acoustic features, needed when infer_from_feature] \
--infer_audio_dir [Your path to your audio files, needed when infer_from_audio] \
--infer_expt_dir Amphion/ckpts/vocoder/[YourExptName] \
--infer_output_dir Amphion/ckpts/vocoder/[YourExptName]/result \
```
## Citations
```bibtex
@misc{gu2023cqt,
title={Multi-Scale Sub-Band Constant-Q Transform Discriminator for High-Fidelity Vocoder},
author={Yicheng Gu and Xueyao Zhang and Liumeng Xue and Zhizheng Wu},
year={2023},
eprint={2311.14957},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
``` |
Tashin/zachet | Tashin | "2023-12-13T19:30:08Z" | 0 | 0 | keras | [
"keras",
"tf-keras",
"ru",
"dataset:mnist",
"region:us"
] | null | "2023-12-13T14:33:57Z" | ---
datasets:
- mnist
language:
- ru
metrics:
- accuracy
library_name: keras
---
1. Description of the task the neural network performs.
Variant 6. Using the MNIST dataset, an autoencoder was built that takes an image of a digit as input and
reconstructs the same image at the output.
2. A layer-by-layer architecture diagram of the network, showing layer sizes and activation
functions.
![](arhitectura.png)
3. Total number of trainable parameters.
It is 131,457, as can be seen in the code.
4. Optimization algorithm and loss function used.
Optimization algorithm: adam; loss function: mse (mean_squared_error).
5. Sizes of the training, validation, and test datasets.
Training: 48,000.
Test: 10,000.
Validation: 12,000 (i.e., 20% of the original 60,000-sample training dataset).
6. Training results: loss and accuracy on all three datasets.
![](loss_and_accuracy.png)
For the test dataset, loss: 0.0339 and accuracy: 0.0097.
Training output:
![](output.png) |
LarryAIDraw/rumi_bluearchive | LarryAIDraw | "2023-12-13T14:42:22Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:36:15Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/227310/rumi-blue-archive |
LarryAIDraw/CHAR-IzayoiMiku | LarryAIDraw | "2023-12-13T14:42:35Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:36:38Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/228682/miku-izayoi-3-outfits-or-date-a-live |
Manelzc/ViT_Proyect | Manelzc | "2023-12-21T14:01:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T14:36:51Z" | Entry not found |
LarryAIDraw/Acheron-08 | LarryAIDraw | "2023-12-13T14:42:49Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:37:00Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/231009/acheron-honkai-star-rail-lora |
amphion/BigVGAN_singing_bigdata | amphion | "2023-12-21T10:21:22Z" | 0 | 2 | null | [
"arxiv:2311.14957",
"license:mit",
"region:us"
] | null | "2023-12-13T14:37:02Z" | ---
license: mit
---
# Amphion Vocoder Pretrained Models
We provide a [BigVGAN](https://github.com/open-mmlab/Amphion/tree/main/egs/vocoder/gan) pretrained checkpoint for singing voice, which is trained on over 120 hours of singing voice data.
## Quick Start
To utilize these pretrained vocoders, just run the following commands:
### Step 1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/BigVGAN_singing_bigdata
```
### Step 2: Clone Amphion's Source Code from GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
### Step 3: Specify the checkpoint's path
Use the soft link to specify the downloaded checkpoint in the first step:
```bash
cd Amphion
mkdir -p ckpts/vocoder
ln -s "$(realpath ../BigVGAN_singing_bigdata/bigvgan_singing)" ckpts/vocoder/bigvgan_singing
```
### Step 4: Inference
For analysis-synthesis on the processed dataset, raw waveforms, or predicted mel spectrograms, you can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/blob/main/egs/vocoder/gan/tfr_enhanced_hifigan/README.md).
```bash
sh egs/vocoder/gan/tfr_enhanced_hifigan/run.sh --stage 3 \
--infer_mode [Your chosen inference mode] \
--infer_datasets [Datasets you want to inference, needed when infer_from_dataset] \
--infer_feature_dir [Your path to your predicted acoustic features, needed when infer_from_feature] \
--infer_audio_dir [Your path to your audio files, needed when infer_from_audio] \
--infer_expt_dir Amphion/ckpts/vocoder/[YourExptName] \
--infer_output_dir Amphion/ckpts/vocoder/[YourExptName]/result \
```
## Citations
```bibtex
@misc{gu2023cqt,
title={Multi-Scale Sub-Band Constant-Q Transform Discriminator for High-Fidelity Vocoder},
author={Yicheng Gu and Xueyao Zhang and Liumeng Xue and Zhizheng Wu},
year={2023},
eprint={2311.14957},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
``` |
paulovsantanas/llm-deberta-v3-swag | paulovsantanas | "2023-12-13T14:39:23Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"en",
"dataset:swag",
"base_model:microsoft/deberta-v3-base",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2023-12-13T14:37:13Z" | ---
language:
- en
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: llm-deberta-v3-swag
results:
- task:
name: Multiple Choice
type: multiple-choice
dataset:
name: SWAG
type: swag
config: regular
split: validation
args: regular
metrics:
- name: Accuracy
type: accuracy
value: 0.8679895997047424
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm-deberta-v3-swag
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the SWAG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7839
- Accuracy: 0.8680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
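As a hedged sketch of how a multiple-choice model like this one is scored on SWAG (real inference would go through `AutoModelForMultipleChoice`, which returns one logit per candidate ending; the logit values below are made up):

```python
# The model emits one logit per candidate ending; the prediction is the
# argmax, and validation accuracy is just mean(prediction == label).
def pick_ending(logits):
    return max(range(len(logits)), key=lambda i: logits[i])

# Four SWAG-style candidate endings, one illustrative logit each:
logits = [-1.2, 0.4, 2.7, 0.1]
print(pick_ending(logits))  # 2

preds, labels = [2, 0, 1], [2, 1, 1]
accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
print(accuracy)  # 2/3 on this toy batch (the card reports 0.8680 on the full split)
```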
|
LarryAIDraw/Genshin_Rosaria | LarryAIDraw | "2023-12-13T14:42:59Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:37:25Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/231262/rosaria-genshin-impact |
LarryAIDraw/itsuki_nakano_final_final | LarryAIDraw | "2023-12-13T14:43:10Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:37:55Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/231294/nakano-itsuki |
Defetya/Falcon-JAX-super-exp | Defetya | "2023-12-13T14:38:17Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T14:38:17Z" | ---
license: apache-2.0
---
|
ecc-andrewb/sdv2_players | ecc-andrewb | "2023-12-13T15:14:23Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-13T14:38:27Z" |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: football player, white background
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ecc-andrewb/sdv2_players
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the prompt *football player, white background* using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
|
LarryAIDraw/lappland | LarryAIDraw | "2023-12-13T14:43:19Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:38:37Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/224838/lappland-arknights-or-goofy-ai |
LarryAIDraw/Roxanne | LarryAIDraw | "2023-12-13T14:43:29Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-13T14:39:16Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/230904/roxanne-or-slave-harem-in-the-labyrinth-of-the-other-world |
shabarish-balaji/midjourney-falcon-7b | shabarish-balaji | "2023-12-13T14:39:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T14:39:47Z" | Entry not found |
aaa12963337/msi-mini2 | aaa12963337 | "2023-12-13T15:28:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nat",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:shi-labs/nat-mini-in1k-224",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T14:41:01Z" | ---
license: mit
base_model: shi-labs/nat-mini-in1k-224
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: msi-mini2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msi-mini2
This model is a fine-tuned version of [shi-labs/nat-mini-in1k-224](https://huggingface.co/shi-labs/nat-mini-in1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0132
- eval_accuracy: 0.6103
- eval_runtime: 97.1589
- eval_samples_per_second: 294.528
- eval_steps_per_second: 18.413
- epoch: 2.0
- step: 4031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_3x_beit_base_sgd_00001_fold1 | hkivancoral | "2023-12-13T14:41:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T14:41:12Z" | Entry not found |
mantagen/dreambooth_mg_portraits | mantagen | "2023-12-13T14:46:42Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-13T14:41:15Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of a mantagen person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - mantagen/dreambooth_mg_portraits
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the prompt *a photo of a mantagen person* using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
tanooki426/ValerieBlaylock | tanooki426 | "2023-12-13T14:46:15Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T14:43:03Z" | ---
license: openrail
---
|
HandSemLin/my_awesome_model | HandSemLin | "2023-12-13T14:43:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T14:43:41Z" | Entry not found |
paulovsantanas/llm-mdeberta-v3-swag | paulovsantanas | "2023-12-18T04:12:58Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"en",
"dataset:swag",
"base_model:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2023-12-13T14:45:04Z" | ---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: llm-mdeberta-v3-swag
results:
- task:
name: Multiple Choice
type: multiple-choice
dataset:
name: SWAG
type: swag
config: regular
split: validation
args: regular
metrics:
- name: Accuracy
type: accuracy
value: 0.777816653251648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm-mdeberta-v3-swag
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the SWAG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8202
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
imi2/Llama-2-70b-longlora-32k-adapter-ggml | imi2 | "2023-12-13T14:47:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T14:46:53Z" | Entry not found |
hkivancoral/smids_3x_beit_base_sgd_001_fold4 | hkivancoral | "2023-12-14T00:52:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T14:50:54Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8483333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3939
- Accuracy: 0.8483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8245 | 1.0 | 225 | 0.8340 | 0.6383 |
| 0.6387 | 2.0 | 450 | 0.6132 | 0.7483 |
| 0.532 | 3.0 | 675 | 0.5292 | 0.7867 |
| 0.4946 | 4.0 | 900 | 0.4935 | 0.8017 |
| 0.5105 | 5.0 | 1125 | 0.4602 | 0.8217 |
| 0.3964 | 6.0 | 1350 | 0.4420 | 0.8217 |
| 0.4068 | 7.0 | 1575 | 0.4284 | 0.83 |
| 0.4501 | 8.0 | 1800 | 0.4257 | 0.8217 |
| 0.3713 | 9.0 | 2025 | 0.4132 | 0.835 |
| 0.3427 | 10.0 | 2250 | 0.4081 | 0.8383 |
| 0.4054 | 11.0 | 2475 | 0.4089 | 0.8367 |
| 0.3818 | 12.0 | 2700 | 0.4017 | 0.84 |
| 0.3036 | 13.0 | 2925 | 0.4061 | 0.8317 |
| 0.2784 | 14.0 | 3150 | 0.3991 | 0.84 |
| 0.2822 | 15.0 | 3375 | 0.3953 | 0.8383 |
| 0.3106 | 16.0 | 3600 | 0.3913 | 0.8383 |
| 0.2716 | 17.0 | 3825 | 0.3985 | 0.8367 |
| 0.3166 | 18.0 | 4050 | 0.3943 | 0.8417 |
| 0.334 | 19.0 | 4275 | 0.3982 | 0.8333 |
| 0.2592 | 20.0 | 4500 | 0.3982 | 0.8383 |
| 0.2836 | 21.0 | 4725 | 0.3926 | 0.8367 |
| 0.2688 | 22.0 | 4950 | 0.3918 | 0.8417 |
| 0.2602 | 23.0 | 5175 | 0.3951 | 0.8417 |
| 0.2941 | 24.0 | 5400 | 0.3932 | 0.8417 |
| 0.254 | 25.0 | 5625 | 0.3963 | 0.8433 |
| 0.2248 | 26.0 | 5850 | 0.3967 | 0.8417 |
| 0.2349 | 27.0 | 6075 | 0.3902 | 0.8417 |
| 0.2318 | 28.0 | 6300 | 0.3960 | 0.8417 |
| 0.2339 | 29.0 | 6525 | 0.3900 | 0.8467 |
| 0.2256 | 30.0 | 6750 | 0.3940 | 0.8483 |
| 0.2306 | 31.0 | 6975 | 0.3948 | 0.84 |
| 0.1769 | 32.0 | 7200 | 0.3920 | 0.8433 |
| 0.2714 | 33.0 | 7425 | 0.3958 | 0.8483 |
| 0.2441 | 34.0 | 7650 | 0.3973 | 0.845 |
| 0.2336 | 35.0 | 7875 | 0.3946 | 0.8483 |
| 0.2411 | 36.0 | 8100 | 0.3957 | 0.8517 |
| 0.2513 | 37.0 | 8325 | 0.3968 | 0.845 |
| 0.2269 | 38.0 | 8550 | 0.3976 | 0.8467 |
| 0.2515 | 39.0 | 8775 | 0.3973 | 0.8517 |
| 0.2727 | 40.0 | 9000 | 0.3940 | 0.8467 |
| 0.2023 | 41.0 | 9225 | 0.3933 | 0.845 |
| 0.2359 | 42.0 | 9450 | 0.3953 | 0.85 |
| 0.2348 | 43.0 | 9675 | 0.3957 | 0.8483 |
| 0.2703 | 44.0 | 9900 | 0.3944 | 0.8517 |
| 0.2898 | 45.0 | 10125 | 0.3951 | 0.8483 |
| 0.2247 | 46.0 | 10350 | 0.3937 | 0.85 |
| 0.2326 | 47.0 | 10575 | 0.3934 | 0.85 |
| 0.2372 | 48.0 | 10800 | 0.3941 | 0.8483 |
| 0.2457 | 49.0 | 11025 | 0.3940 | 0.8483 |
| 0.2302 | 50.0 | 11250 | 0.3939 | 0.8483 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
IshimaIshimsky/ifrit | IshimaIshimsky | "2023-12-13T14:53:22Z" | 0 | 0 | null | [
"ru",
"license:unknown",
"region:us"
] | null | "2023-12-13T14:51:31Z" | ---
license: unknown
language:
- ru
---
Ифрит/Ifrit rmvpe 250 epochs |
bbillapati/distilhubert-finetuned-gtzan | bbillapati | "2023-12-28T10:35:00Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:gtzan",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-12-13T14:52:49Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: gtzan
type: gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the gtzan dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9983
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0454 | 1.0 | 113 | 1.9653 | 0.56 |
| 1.3054 | 2.0 | 226 | 1.3132 | 0.7 |
| 0.9479 | 3.0 | 339 | 0.9596 | 0.75 |
| 0.7291 | 4.0 | 452 | 0.8110 | 0.75 |
| 0.6033 | 5.0 | 565 | 0.7330 | 0.81 |
| 0.2973 | 6.0 | 678 | 0.7070 | 0.79 |
| 0.3574 | 7.0 | 791 | 0.6908 | 0.83 |
| 0.2078 | 8.0 | 904 | 0.7105 | 0.83 |
| 0.1569 | 9.0 | 1017 | 0.7204 | 0.83 |
| 0.0812 | 10.0 | 1130 | 0.7471 | 0.84 |
| 0.0451 | 11.0 | 1243 | 0.8439 | 0.85 |
| 0.0148 | 12.0 | 1356 | 0.9538 | 0.83 |
| 0.0096 | 13.0 | 1469 | 0.9364 | 0.84 |
| 0.0084 | 14.0 | 1582 | 0.9808 | 0.83 |
| 0.0084 | 15.0 | 1695 | 0.9983 | 0.83 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.15.0
|
hkivancoral/smids_3x_beit_base_rms_0001_fold4 | hkivancoral | "2023-12-13T15:42:13Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T14:53:44Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8372
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.787 | 1.0 | 225 | 0.7726 | 0.5917 |
| 0.7085 | 2.0 | 450 | 0.7059 | 0.6467 |
| 0.6145 | 3.0 | 675 | 0.6591 | 0.6983 |
| 0.6005 | 4.0 | 900 | 0.5422 | 0.7817 |
| 0.572 | 5.0 | 1125 | 0.5970 | 0.7567 |
| 0.4372 | 6.0 | 1350 | 0.5610 | 0.785 |
| 0.3918 | 7.0 | 1575 | 0.5957 | 0.7917 |
| 0.4058 | 8.0 | 1800 | 0.5296 | 0.7933 |
| 0.3971 | 9.0 | 2025 | 0.6041 | 0.7833 |
| 0.3274 | 10.0 | 2250 | 0.5347 | 0.8 |
| 0.2417 | 11.0 | 2475 | 0.6768 | 0.785 |
| 0.1989 | 12.0 | 2700 | 0.6501 | 0.8133 |
| 0.2222 | 13.0 | 2925 | 0.6337 | 0.7933 |
| 0.1654 | 14.0 | 3150 | 0.7865 | 0.7867 |
| 0.1241 | 15.0 | 3375 | 0.7840 | 0.8033 |
| 0.1208 | 16.0 | 3600 | 0.9856 | 0.795 |
| 0.0877 | 17.0 | 3825 | 1.0442 | 0.7767 |
| 0.1165 | 18.0 | 4050 | 0.9465 | 0.8117 |
| 0.1328 | 19.0 | 4275 | 0.8299 | 0.81 |
| 0.0427 | 20.0 | 4500 | 1.1880 | 0.7917 |
| 0.0826 | 21.0 | 4725 | 1.0665 | 0.8083 |
| 0.0679 | 22.0 | 4950 | 1.2201 | 0.7917 |
| 0.1018 | 23.0 | 5175 | 1.1824 | 0.8 |
| 0.0255 | 24.0 | 5400 | 1.2359 | 0.8117 |
| 0.0956 | 25.0 | 5625 | 1.2156 | 0.805 |
| 0.0725 | 26.0 | 5850 | 1.3671 | 0.81 |
| 0.0849 | 27.0 | 6075 | 1.3399 | 0.7917 |
| 0.068 | 28.0 | 6300 | 1.3279 | 0.8117 |
| 0.0512 | 29.0 | 6525 | 1.1460 | 0.82 |
| 0.0439 | 30.0 | 6750 | 1.4730 | 0.8017 |
| 0.0414 | 31.0 | 6975 | 1.2224 | 0.8067 |
| 0.0174 | 32.0 | 7200 | 1.6967 | 0.7983 |
| 0.0407 | 33.0 | 7425 | 1.5401 | 0.7983 |
| 0.0316 | 34.0 | 7650 | 1.2844 | 0.8017 |
| 0.0008 | 35.0 | 7875 | 1.7477 | 0.805 |
| 0.0104 | 36.0 | 8100 | 1.5173 | 0.8167 |
| 0.0005 | 37.0 | 8325 | 1.6340 | 0.7967 |
| 0.0286 | 38.0 | 8550 | 1.4323 | 0.7983 |
| 0.0292 | 39.0 | 8775 | 1.4953 | 0.805 |
| 0.0108 | 40.0 | 9000 | 1.6930 | 0.8183 |
| 0.022 | 41.0 | 9225 | 1.7083 | 0.8033 |
| 0.0101 | 42.0 | 9450 | 1.8030 | 0.8083 |
| 0.0122 | 43.0 | 9675 | 1.8925 | 0.8133 |
| 0.0071 | 44.0 | 9900 | 1.7250 | 0.815 |
| 0.0004 | 45.0 | 10125 | 1.7937 | 0.8017 |
| 0.0008 | 46.0 | 10350 | 1.9056 | 0.8067 |
| 0.0003 | 47.0 | 10575 | 1.8311 | 0.8083 |
| 0.0001 | 48.0 | 10800 | 1.9401 | 0.8033 |
| 0.0001 | 49.0 | 11025 | 1.8499 | 0.8083 |
| 0.0 | 50.0 | 11250 | 1.8372 | 0.81 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
danlindb/Reinforce-Pixelcopter-PLE-v0 | danlindb | "2023-12-13T14:56:59Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-13T14:56:10Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 56.80 +/- 43.03
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EmmaGthn/results_lora_40_3000 | EmmaGthn | "2023-12-13T17:10:22Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2023-12-13T14:58:57Z" | Entry not found |
Ogpoggi/donut_receipt | Ogpoggi | "2023-12-13T15:00:14Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T14:59:15Z" | Entry not found |
vvwvvw/zhaiyao_zuoye | vvwvvw | "2023-12-13T15:00:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:00:26Z" | Entry not found |
Aqua134/Info | Aqua134 | "2023-12-13T15:01:00Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T15:01:00Z" | ---
license: apache-2.0
---
|
simoHamlili/chatbot413 | simoHamlili | "2023-12-13T15:05:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:05:30Z" | Entry not found |
mwezizar/my_awesome_model | mwezizar | "2023-12-13T15:06:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:06:05Z" | Entry not found |
GlycerinLOL/LLM_Teach_Bart | GlycerinLOL | "2023-12-13T19:59:15Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T15:07:57Z" | ---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: LLM_Teach_Bart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLM_Teach_Bart
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8314
- Rouge1: 0.4848
- Rouge2: 0.215
- Rougel: 0.3765
- Rougelsum: 0.3762
- Gen Len: 44.2945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.7164 | 1.0 | 625 | 1.7203 | 0.4724 | 0.2088 | 0.3677 | 0.3675 | 44.1491 |
| 1.3424 | 2.0 | 1250 | 1.6998 | 0.4841 | 0.2167 | 0.3705 | 0.3699 | 45.3727 |
| 1.1171 | 3.0 | 1875 | 1.7546 | 0.4824 | 0.2144 | 0.3735 | 0.3735 | 43.7636 |
| 0.8193 | 4.0 | 2500 | 1.8314 | 0.4848 | 0.215 | 0.3765 | 0.3762 | 44.2945 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
mantagen/dreambooth_patcha | mantagen | "2023-12-13T15:20:52Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-13T15:08:00Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of patcha
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - mantagen/dreambooth_patcha
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of patcha using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
manojpawar/test | manojpawar | "2023-12-13T15:08:09Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T15:08:09Z" | ---
license: apache-2.0
---
|
sandorscog/test_trainer | sandorscog | "2023-12-13T15:09:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:09:26Z" | Entry not found |
kienprb/lao_viet_translation | kienprb | "2023-12-15T01:24:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:11:03Z" | To reproduce the translations exactly as reported:
First, run the cleandata.py script with the command:
python cleandata.py --input_path path_data --out_path save_path
The data will be cleaned and saved under the given output path.
For example, if the input data folder looks like:
input_data:
test.lo
test.vi
and you want to save to the output_data folder, run:
python cleandata.py --input_path 'input_data/test' --out_path 'output_data/test'
The cleaned data will be written to two files:
output_data/test.lo
output_data/test.vi |
situty/academia | situty | "2023-12-13T15:11:11Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T15:11:11Z" | ---
license: apache-2.0
---
|
seatond/NEW_badprompt_rank16_lr1.5e-05_target8_epochs1_laplha32_wuratio0.125_wdecay0.25 | seatond | "2023-12-13T15:27:40Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"region:us"
] | null | "2023-12-13T15:11:32Z" | ---
library_name: peft
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0 |
situty/situ | situty | "2023-12-13T15:12:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:12:06Z" | Entry not found |
simoHamlili/results | simoHamlili | "2023-12-13T15:14:59Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2023-12-13T15:14:08Z" | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: chatbot413
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chatbot413
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.09
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
Ammad1Ali/m2m100_1.2B-2.0 | Ammad1Ali | "2023-12-13T15:39:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"ast",
"az",
"ba",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"ilo",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"lb",
"lg",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"oc",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"th",
"tl",
"tn",
"tr",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2010.11125",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T15:14:28Z" | ---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
---
# *This is the same model, fine-tuned on a more recent dataset to better reflect modern language use*
# M2M100 1.2B
M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
title={Beyond English-Centric Multilingual Machine Translation},
author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
year={2020},
eprint={2010.11125},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
sid220/asl-now-fingerspelling | sid220 | "2023-12-22T19:59:14Z" | 0 | 0 | keras | [
"keras",
"en",
"dataset:sid220/asl-now-fingerspelling",
"doi:10.57967/hf/1516",
"license:mit",
"region:us"
] | null | "2023-12-13T15:14:41Z" | ---
license: mit
datasets:
- sid220/asl-now-fingerspelling
language:
- en
metrics:
- accuracy
library_name: keras
---
# ASLNow!
ASLNow! is a web app designed to make learning ASL fingerspelling easy and fun! You can try it live at [asl-now.vercel.app](https://asl-now.vercel.app/).
Demo: [https://www.youtube.com/watch?v=Wi5tAxVasq8](https://www.youtube.com/watch?v=Wi5tAxVasq8)
## Model
This model, trained on the isolated fingerspelling dataset, is licensed under the MIT License. It will be updated frequently as more data is collected.
### Format
![Overview of Model](images/plotted_model.png)
#### Input
21 hand landmarks, each composed of `x`, `y` and `z` coordinates. The `x` and `y` coordinates are normalized to `[0.0, 1.0]` by the image width and height, respectively. The `z` coordinate represents the landmark depth, with the depth at the wrist being the origin. The smaller the value, the closer the landmark is to the camera. The magnitude of `z` uses roughly the same scale as `x`.
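As a rough illustration (not taken from this card), the sketch below packs 21 landmark objects into the `(21, 3)` input array described above and maps a probability vector back to a letter using the A–Z class indices listed under Output; the `Landmark` class is a stand-in for MediaPipe's landmark objects, and the actual Keras `predict` call is omitted:

```python
import numpy as np

# Class indices from the card's Output section (A=0 ... Z=25), inverted so an
# argmax index can be mapped back to a letter.
CLASSES = {chr(ord("A") + i): i for i in range(26)}
INDEX_TO_LETTER = {v: k for k, v in CLASSES.items()}

class Landmark:
    """Stand-in for a MediaPipe hand landmark (.x/.y normalized, .z wrist-relative)."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

def landmarks_to_input(landmarks):
    """Pack 21 landmarks into the (21, 3) float array the model expects."""
    assert len(landmarks) == 21, "expected exactly 21 hand landmarks"
    return np.array([[lm.x, lm.y, lm.z] for lm in landmarks], dtype=np.float32)

def decode_prediction(probabilities):
    """Map the model's 26 per-class probabilities to the most likely letter."""
    return INDEX_TO_LETTER[int(np.argmax(probabilities))]

# Dummy end-to-end pass (the model inference itself is omitted):
features = landmarks_to_input([Landmark(0.5, 0.5, 0.0) for _ in range(21)])
fake_probs = np.zeros(26)
fake_probs[CLASSES["B"]] = 1.0
print(features.shape, decode_prediction(fake_probs))  # (21, 3) B
```

Only the array packing and the argmax decoding follow the formats documented in this card; everything else is illustrative scaffolding.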
![Hand Landmarks](https://developers.google.com/static/mediapipe/images/solutions/hand-landmarks.png)
From: [https://developers.google.com/mediapipe/solutions/vision/hand_landmarker](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker)
Example:
```
[
# Landmark 1
[x, y, z],
# Landmark 2
[x, y, z],
...
# Landmark 20
[x, y, z]
# Landmark 21
[x, y, z]
]
```
#### Output
The probability of each class, where classes are defined as such:
```json
{
"A": 0,
"B": 1,
"C": 2,
"D": 3,
"E": 4,
"F": 5,
"G": 6,
"H": 7,
"I": 8,
"J": 9,
"K": 10,
"L": 11,
"M": 12,
"N": 13,
"O": 14,
"P": 15,
"Q": 16,
"R": 17,
"S": 18,
"T": 19,
"U": 20,
"V": 21,
"W": 22,
"X": 23,
"Y": 24,
"Z": 25
}
``` |
suncy13/sample_footColor | suncy13 | "2023-12-13T15:19:05Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/dinov2-small-imagenet1k-1-layer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T15:14:59Z" | ---
license: apache-2.0
base_model: facebook/dinov2-small-imagenet1k-1-layer
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sample_footColor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sample_footColor
This model is a fine-tuned version of [facebook/dinov2-small-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-small-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0480
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9818 | 1.0 | 12 | 1.4086 | 0.55 |
| 4.0926 | 2.0 | 24 | 1.1745 | 0.55 |
| 1.9427 | 3.0 | 36 | 1.8742 | 0.2 |
| 4.0665 | 4.0 | 48 | 1.0673 | 0.55 |
| 2.225 | 5.0 | 60 | 1.3158 | 0.55 |
| 1.4044 | 6.0 | 72 | 1.1891 | 0.55 |
| 1.6696 | 7.0 | 84 | 1.0104 | 0.55 |
| 1.1405 | 8.0 | 96 | 1.1005 | 0.55 |
| 1.5148 | 9.0 | 108 | 1.0531 | 0.55 |
| 1.1416 | 10.0 | 120 | 1.0480 | 0.55 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FranzderPapst/MA | FranzderPapst | "2023-12-13T15:19:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:19:06Z" | Entry not found |
ADISH007/Aws_donut_10k_incremental_1_Epoch_11 | ADISH007 | "2023-12-13T15:20:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T15:20:35Z" | Entry not found |
aungshuman-cavallo/nl2sqldemo | aungshuman-cavallo | "2023-12-13T15:26:12Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T15:23:05Z" | Entry not found |
Tachi67/CoderFlowModule | Tachi67 | "2024-01-04T00:59:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:24:08Z" | ### Structure of Coder
```
goal, memory_files (dict)
|
v
+-------------------+
| MemoryReading | Reads in the content of the memory files
| Flow |
+-------------------+
|
| (memory_content)
|
v
+-------------------+
| PlanWriter | Writes a step-by-step plan to achieve the goal
+-------------------+
|
| (plan)
|
v
+-------------------+
|  CtrlExMemFlow    | Illustrated below. Carries out the plan in a controller-executor fashion,
| | with memory management mechanisms.
+-------------------+
|
(summary, result)
```
Here is the structure of the `CtrlExMemFlow`:
```
plan, memory_files, memory_content, goal
|
v
+---------------+
| Controller | --------<<<<-----------+
+---------------+ |
| |
| (command, command args) |
| |
v |
+------------------+ |
| Executor | Each branch is an |
| (Tree Structure) | executor |
+------------------+ |
| ^
| (execution results) ^
| ^
v ^
+---------------+ ^
| MemWriteFlow | Updates memory files ^
+---------------+ ^
| ^
| (summary) |
| |
v |
+---------------+ |
| MemReadFlow | Reads updated memory |
+---------------+ |
| |
| (updated memory content) |
| |
+-> goes back to the Controller>-+
```
Structure of the Executors:
```
+-------------------+
| Branching |
| Executor |
+-------------------+
/ | | | \
/ | | | \
/ | | | \
/ | | | \
Extend_library ask_user re_plan update_plan run_code
```
Memory files of Coder:
- plan_coder.txt
- logs_coder.txt
- library.py
About the branches:
- [ExtendLibrary](https://huggingface.co/Tachi67/ExtendLibraryFlowModule): Writes and tests code functions in an interactive fashion, and finally saves the written function to the code library.
- [ask_user](https://huggingface.co/Tachi67/ExtendLibraryFlowModule/blob/main/ExtLibAskUserFlow.py): Ask user for info / confirmation, etc.
- [re_plan](https://huggingface.co/Tachi67/ReplanningFlowModule): One branch of the executors, when something goes wrong, re-draft the plan.
- [update_plan](https://huggingface.co/Tachi67/JarvisFlowModule/blob/main/UpdatePlanAtomicFlow.py): One branch of the executors, when the controller realizes that one (or some, depending on the LLM's response) step of the plan is (are) done, it generates a new plan that marks the step(s) as done.
- [run_code](https://huggingface.co/Tachi67/RunCodeFlowModule): Runs the code written by the Controller, will first open up a temp file with the code for user confirmation and editing, then the code is passed to the [InterpreterFlow](https://huggingface.co/Tachi67/InterpreterFlowModule).
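Illustratively, the controller-executor dispatch described above can be sketched as follows. This is a minimal, hypothetical sketch, not the actual aiFlows API: the executor functions, the `EXECUTORS` table, and `dispatch` are names invented purely for illustration.

```python
# Minimal sketch of the branching-executor dispatch described above.
# All names here are illustrative stand-ins, not the real aiFlows classes.

def extend_library(args): return f"extended library with: {args}"
def ask_user(args): return f"asked user: {args}"
def re_plan(args): return f"re-planned: {args}"
def update_plan(args): return f"updated plan: {args}"
def run_code(args): return f"ran code: {args}"

# The branching executor maps each controller command to one branch.
EXECUTORS = {
    "extend_library": extend_library,
    "ask_user": ask_user,
    "re_plan": re_plan,
    "update_plan": update_plan,
    "run_code": run_code,
}

def dispatch(command: str, command_args: str) -> str:
    """Route the controller's (command, command args) pair to an executor branch."""
    if command not in EXECUTORS:
        raise ValueError(f"unknown command: {command}")
    return EXECUTORS[command](command_args)
```

In the real flow, each branch is itself a sub-flow (and the results feed back to the controller through the memory read/write flows shown in the diagram), but the routing idea is the same.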
# Table of Contents
* [CtrlExMem\_CoderFlow](#CtrlExMem_CoderFlow)
* [CtrlExMem\_CoderFlow](#CtrlExMem_CoderFlow.CtrlExMem_CoderFlow)
* [run\_coder](#run_coder)
* [Planner\_CoderFlow](#Planner_CoderFlow)
* [Planner\_CoderFlow](#Planner_CoderFlow.Planner_CoderFlow)
* [Controller\_CoderFlow](#Controller_CoderFlow)
* [Controller\_CoderFlow](#Controller_CoderFlow.Controller_CoderFlow)
* [UpdatePlanAtomicFlow](#UpdatePlanAtomicFlow)
* [UpdatePlanAtomicFlow](#UpdatePlanAtomicFlow.UpdatePlanAtomicFlow)
* [CoderFlow](#CoderFlow)
* [CoderFlow](#CoderFlow.CoderFlow)
* [run](#CoderFlow.CoderFlow.run)
* [\_\_init\_\_](#__init__)
<a id="CtrlExMem_CoderFlow"></a>
# CtrlExMem\_CoderFlow
<a id="CtrlExMem_CoderFlow.CtrlExMem_CoderFlow"></a>
## CtrlExMem\_CoderFlow Objects
```python
class CtrlExMem_CoderFlow(CtrlExMemFlow)
```
This class inherits from the CtrlExMemFlow class from AbstractBossFlowModule.
See: https://huggingface.co/Tachi67/AbstractBossFlowModule/blob/main/CtrlExMemFlow.py
*Input Interface*:
- `plan`
- `logs`
- `code_library`: the signatures and docstring of the functions in the code library.
- `memory_files`
- `goal`
*Output Interface*
- `result` (str): The result of the flow, the result will be returned to the caller.
- `summary` (str): The summary of the flow, the summary will be logged into the logs of the caller flow.
<a id="run_coder"></a>
# run\_coder
<a id="Planner_CoderFlow"></a>
# Planner\_CoderFlow
<a id="Planner_CoderFlow.Planner_CoderFlow"></a>
## Planner\_CoderFlow Objects
```python
class Planner_CoderFlow(PlanWriterFlow)
```
Planner of the coder flow, it inherits from PlanWriterFlow, see: https://huggingface.co/Tachi67/PlanWriterFlowModule
<a id="Controller_CoderFlow"></a>
# Controller\_CoderFlow
<a id="Controller_CoderFlow.Controller_CoderFlow"></a>
## Controller\_CoderFlow Objects
```python
class Controller_CoderFlow(ChatAtomicFlow)
```
Refer to: https://huggingface.co/Tachi67/JarvisFlowModule/blob/main/Controller_JarvisFlow.py
<a id="UpdatePlanAtomicFlow"></a>
# UpdatePlanAtomicFlow
<a id="UpdatePlanAtomicFlow.UpdatePlanAtomicFlow"></a>
## UpdatePlanAtomicFlow Objects
```python
class UpdatePlanAtomicFlow(AtomicFlow)
```
This flow updates the plan file with the updated plan. The controller should pass the updated plan to this flow.
This design (the controller reflects on the existing plan, then updates it) is intended to make the controller more
aware of the plan it has. However, one drawback is that this process is then not deterministic.
*Input Interface*
- `updated_plan`: the updated plan in string format
*Output Interface*
- `result`: the result of the update plan operation
<a id="CoderFlow"></a>
# CoderFlow
<a id="CoderFlow.CoderFlow"></a>
## CoderFlow Objects
```python
class CoderFlow(AbstractBossFlow)
```
Coder flow is one executor branch of the Jarvis flow. At a higher level, it is a flow that
writes and runs code given a goal. In the Jarvis flow, the Coder flow is invoked by the controller:
it receives the goal generated by the controller, then writes and runs code in an interactive fashion.
The Coder flow has a similar structure to the Jarvis flow (inherited from AbstractBossFlow).
*Input Interface (expected input)*
- `goal` (str): The goal from the caller (source flow, i.e. JarvisFlow)
*Output Interface (expected output)*
- `result` (str): The result of the flow, the result will be returned to the caller (i.e. JarvisFlow).
- `summary` (str): The summary of the flow, the summary will be logged into the logs of the caller flow (i.e. JarvisFlow).
Typical workflow of Coder:
0. JarvisFlow calls Coder with a goal.
1. MemoryReading reads plans, logs and code library.
2. Planner makes plan based on goal.
3. Extend library with the goal given by the controller.
4. Run code with code (possibly calls the newly written function) given by the controller.
5. Finish and give an answer.
<a id="CoderFlow.CoderFlow.run"></a>
#### run
```python
def run(input_data: Dict[str, Any]) -> Dict[str, Any]
```
The run function of the Coder flow.
**Arguments**:
- `input_data` (`Dict[str, Any]`): The input data of the flow.
**Returns**:
`Dict[str, Any]`: The output data of the flow.
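Based only on the input/output interface documented above, a call to the flow's `run` method would look like the following sketch. `CoderFlowStub` is a hypothetical stand-in written for illustration; the real `CoderFlow` class lives in the aiFlows framework and does the actual planning and code execution.

```python
from typing import Any, Dict

# Illustrative stand-in mirroring the documented interface of CoderFlow.
# The real flow plans, writes, and runs code; this stub only echoes the
# documented output interface: `result` and `summary`.
class CoderFlowStub:
    def run(self, input_data: Dict[str, Any]) -> Dict[str, Any]:
        goal = input_data["goal"]
        return {
            "result": f"(result for goal: {goal})",
            "summary": f"(summary of work on: {goal})",
        }

coder_flow = CoderFlowStub()
output = coder_flow.run({"goal": "write a function that reverses a string"})
# Per the output interface, `output` contains the keys "result" and "summary":
# the result is returned to the caller (JarvisFlow), and the summary is logged
# into the caller's logs.
```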
<a id="__init__"></a>
# \_\_init\_\_
|
Pray123/BERTlm | Pray123 | "2023-12-13T15:24:31Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T15:24:31Z" | ---
license: apache-2.0
---
|
Janhaci/bert-based | Janhaci | "2023-12-13T15:26:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:26:27Z" | Entry not found |
mehta77/dolly-lora | mehta77 | "2023-12-13T15:28:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-j-6B",
"region:us"
] | null | "2023-12-13T15:28:26Z" | ---
library_name: peft
base_model: EleutherAI/gpt-j-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
seatond/NEW_REMn_rank16_lr2e-05_target8_epochs2_laplha32_wuratio0.125_wdecay0.25 | seatond | "2023-12-13T15:44:42Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"region:us"
] | null | "2023-12-13T15:30:28Z" | ---
library_name: peft
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0 |
seatond/NEW_REMall_rank16_lr1.5e-05_target8_epochs1_laplha32_wuratio0.125_wdecay0.25 | seatond | "2023-12-13T15:45:47Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"region:us"
] | null | "2023-12-13T15:30:56Z" | ---
library_name: peft
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0 |
lixtic/rare-puppers | lixtic | "2023-12-13T15:31:19Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T15:31:13Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9850746393203735
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Cat
![Cat](images/Cat.jpg)
#### Dog
![Dog](images/Dog.jpg)
#### Horse
![Horse](images/Horse.jpg)
#### lion
![lion](images/lion.jpg) |
jitu028/jitu028-ai-lab | jitu028 | "2023-12-13T15:33:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T15:31:44Z" | Entry not found |
revathyds31/donut-finetune-20sample-LB | revathyds31 | "2023-12-13T15:37:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T15:37:07Z" | Entry not found |
hkivancoral/smids_3x_beit_base_sgd_001_fold5 | hkivancoral | "2023-12-14T01:40:55Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T15:39:24Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8783333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3105
- Accuracy: 0.8783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8516 | 1.0 | 225 | 0.8297 | 0.6267 |
| 0.6679 | 2.0 | 450 | 0.6103 | 0.7567 |
| 0.57 | 3.0 | 675 | 0.5223 | 0.7883 |
| 0.4959 | 4.0 | 900 | 0.4753 | 0.8083 |
| 0.4424 | 5.0 | 1125 | 0.4319 | 0.8233 |
| 0.4261 | 6.0 | 1350 | 0.4129 | 0.8283 |
| 0.4396 | 7.0 | 1575 | 0.4075 | 0.8167 |
| 0.4595 | 8.0 | 1800 | 0.3942 | 0.8267 |
| 0.4172 | 9.0 | 2025 | 0.3692 | 0.8367 |
| 0.3688 | 10.0 | 2250 | 0.3605 | 0.8583 |
| 0.4132 | 11.0 | 2475 | 0.3610 | 0.8417 |
| 0.369 | 12.0 | 2700 | 0.3465 | 0.8567 |
| 0.3672 | 13.0 | 2925 | 0.3443 | 0.8517 |
| 0.3409 | 14.0 | 3150 | 0.3437 | 0.855 |
| 0.2695 | 15.0 | 3375 | 0.3370 | 0.8567 |
| 0.311 | 16.0 | 3600 | 0.3373 | 0.8533 |
| 0.3177 | 17.0 | 3825 | 0.3325 | 0.8567 |
| 0.3059 | 18.0 | 4050 | 0.3310 | 0.8567 |
| 0.3295 | 19.0 | 4275 | 0.3271 | 0.8583 |
| 0.3201 | 20.0 | 4500 | 0.3301 | 0.8667 |
| 0.2645 | 21.0 | 4725 | 0.3242 | 0.8683 |
| 0.2497 | 22.0 | 4950 | 0.3240 | 0.8633 |
| 0.2626 | 23.0 | 5175 | 0.3196 | 0.8617 |
| 0.267 | 24.0 | 5400 | 0.3185 | 0.8733 |
| 0.2637 | 25.0 | 5625 | 0.3155 | 0.8733 |
| 0.3416 | 26.0 | 5850 | 0.3155 | 0.8783 |
| 0.3255 | 27.0 | 6075 | 0.3159 | 0.8767 |
| 0.3021 | 28.0 | 6300 | 0.3189 | 0.875 |
| 0.2292 | 29.0 | 6525 | 0.3137 | 0.8783 |
| 0.2207 | 30.0 | 6750 | 0.3185 | 0.8733 |
| 0.2158 | 31.0 | 6975 | 0.3173 | 0.8683 |
| 0.2149 | 32.0 | 7200 | 0.3154 | 0.87 |
| 0.248 | 33.0 | 7425 | 0.3134 | 0.8767 |
| 0.2339 | 34.0 | 7650 | 0.3133 | 0.875 |
| 0.2585 | 35.0 | 7875 | 0.3147 | 0.8767 |
| 0.2565 | 36.0 | 8100 | 0.3120 | 0.875 |
| 0.269 | 37.0 | 8325 | 0.3111 | 0.8783 |
| 0.2546 | 38.0 | 8550 | 0.3139 | 0.8733 |
| 0.2114 | 39.0 | 8775 | 0.3110 | 0.8767 |
| 0.2032 | 40.0 | 9000 | 0.3108 | 0.8767 |
| 0.2376 | 41.0 | 9225 | 0.3108 | 0.8783 |
| 0.2558 | 42.0 | 9450 | 0.3092 | 0.8767 |
| 0.2753 | 43.0 | 9675 | 0.3113 | 0.875 |
| 0.2795 | 44.0 | 9900 | 0.3109 | 0.8767 |
| 0.2412 | 45.0 | 10125 | 0.3113 | 0.8783 |
| 0.2003 | 46.0 | 10350 | 0.3105 | 0.88 |
| 0.2528 | 47.0 | 10575 | 0.3109 | 0.88 |
| 0.2265 | 48.0 | 10800 | 0.3109 | 0.8783 |
| 0.2494 | 49.0 | 11025 | 0.3106 | 0.8783 |
| 0.2763 | 50.0 | 11250 | 0.3105 | 0.8783 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/smids_3x_beit_base_rms_0001_fold5 | hkivancoral | "2023-12-13T16:30:22Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T15:43:03Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8133333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_0001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9458
- Accuracy: 0.8133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8091 | 1.0 | 225 | 0.7888 | 0.5967 |
| 0.6825 | 2.0 | 450 | 0.6663 | 0.7 |
| 0.6331 | 3.0 | 675 | 0.6179 | 0.7233 |
| 0.545 | 4.0 | 900 | 0.5461 | 0.7667 |
| 0.4511 | 5.0 | 1125 | 0.5082 | 0.785 |
| 0.5168 | 6.0 | 1350 | 0.4960 | 0.7867 |
| 0.498 | 7.0 | 1575 | 0.4691 | 0.8133 |
| 0.4445 | 8.0 | 1800 | 0.5042 | 0.7883 |
| 0.3476 | 9.0 | 2025 | 0.5424 | 0.7883 |
| 0.3599 | 10.0 | 2250 | 0.4586 | 0.8183 |
| 0.2707 | 11.0 | 2475 | 0.5539 | 0.8133 |
| 0.3169 | 12.0 | 2700 | 0.5413 | 0.805 |
| 0.2566 | 13.0 | 2925 | 0.6318 | 0.7933 |
| 0.1731 | 14.0 | 3150 | 0.6551 | 0.805 |
| 0.1033 | 15.0 | 3375 | 0.7315 | 0.7967 |
| 0.1418 | 16.0 | 3600 | 0.6489 | 0.8217 |
| 0.1165 | 17.0 | 3825 | 0.8690 | 0.815 |
| 0.1219 | 18.0 | 4050 | 0.8038 | 0.7883 |
| 0.1084 | 19.0 | 4275 | 0.9593 | 0.795 |
| 0.0893 | 20.0 | 4500 | 0.9469 | 0.8 |
| 0.0691 | 21.0 | 4725 | 1.0155 | 0.8183 |
| 0.155 | 22.0 | 4950 | 1.0808 | 0.8033 |
| 0.068 | 23.0 | 5175 | 1.2932 | 0.8067 |
| 0.0871 | 24.0 | 5400 | 1.0549 | 0.7933 |
| 0.0614 | 25.0 | 5625 | 1.2073 | 0.8183 |
| 0.0426 | 26.0 | 5850 | 1.1147 | 0.8167 |
| 0.0501 | 27.0 | 6075 | 1.1794 | 0.8067 |
| 0.0496 | 28.0 | 6300 | 1.2384 | 0.8033 |
| 0.0951 | 29.0 | 6525 | 1.1319 | 0.795 |
| 0.0433 | 30.0 | 6750 | 1.1451 | 0.8217 |
| 0.0293 | 31.0 | 6975 | 1.3635 | 0.815 |
| 0.0095 | 32.0 | 7200 | 1.4313 | 0.8067 |
| 0.0383 | 33.0 | 7425 | 1.2822 | 0.8217 |
| 0.0173 | 34.0 | 7650 | 1.4012 | 0.8217 |
| 0.0213 | 35.0 | 7875 | 1.5178 | 0.8117 |
| 0.0022 | 36.0 | 8100 | 1.6408 | 0.8283 |
| 0.0311 | 37.0 | 8325 | 1.7017 | 0.8117 |
| 0.0285 | 38.0 | 8550 | 1.5903 | 0.8183 |
| 0.0027 | 39.0 | 8775 | 1.6460 | 0.815 |
| 0.0036 | 40.0 | 9000 | 1.6806 | 0.7983 |
| 0.0136 | 41.0 | 9225 | 1.5975 | 0.8067 |
| 0.0063 | 42.0 | 9450 | 1.8013 | 0.8217 |
| 0.0155 | 43.0 | 9675 | 1.6944 | 0.8083 |
| 0.0003 | 44.0 | 9900 | 1.5360 | 0.8283 |
| 0.0001 | 45.0 | 10125 | 1.7589 | 0.8283 |
| 0.0203 | 46.0 | 10350 | 1.7247 | 0.8267 |
| 0.0004 | 47.0 | 10575 | 1.7343 | 0.8217 |
| 0.0069 | 48.0 | 10800 | 1.8685 | 0.8183 |
| 0.0001 | 49.0 | 11025 | 1.9172 | 0.8133 |
| 0.0 | 50.0 | 11250 | 1.9458 | 0.8133 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
miweru/ochat3-5_schwurpus | miweru | "2023-12-16T10:16:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat_3.5",
"region:us"
] | null | "2023-12-13T15:43:25Z" | ---
library_name: peft
base_model: openchat/openchat_3.5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
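
The `load_in_8bit: True` entry corresponds to LLM.int8()-style quantization: weights are stored as int8 with an absmax scale factor, and activation columns whose magnitude exceeds `llm_int8_threshold` are kept in higher precision. A rough pure-Python sketch of the absmax round-trip (illustrative only; the actual bitsandbytes kernels use vector-wise scales and fp16 outlier handling):

```python
def absmax_quantize(values):
    """Quantize a list of floats to int8 range using a single absmax scale."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard against all-zero input
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map the int8 codes back to floats."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, -0.07]
q, scale = absmax_quantize(weights)
restored = dequantize(q, scale)
# round-trip error per element is bounded by half a quantization step (scale / 2)
```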
### Framework versions
- PEFT 0.7.0 |
jolenechong/lora-bart-samsum-tib-1024 | jolenechong | "2023-12-13T15:49:35Z" | 0 | 0 | peft | [
"peft",
"bart",
"summarization",
"dataset:gigant/tib",
"base_model:philschmid/bart-large-cnn-samsum",
"license:mit",
"region:us"
] | summarization | "2023-12-13T15:43:36Z" | ---
license: mit
base_model: philschmid/bart-large-cnn-samsum
model-index:
- name: lora-bart-samsum-tib-1024
results: []
library_name: peft
datasets:
- gigant/tib
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-bart-samsum-tib-1024
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the TIB dataset.
## Model description
Fine-tuned with LoRA on the TIB dataset.
A quick demo of its capabilities:
```
Moderator: Good afternoon, everyone, and welcome to today's webinar on the fascinating and rapidly evolving topic of Artificial Intelligence. We have a distinguished panel of experts with us today who will shed light on the latest developments in AI and its impact on various aspects of our lives. I'll start by introducing our first speaker, Dr. Emily Rodriguez, a renowned AI researcher and professor.
Dr. Rodriguez: Thank you, it's a pleasure to be here. Artificial Intelligence has witnessed remarkable growth over the past few decades, and it's now ingrained in our daily lives, from voice assistants in our smartphones to self-driving cars and even in healthcare diagnostics. AI technologies are advancing at an unprecedented rate, driven by deep learning and neural networks. These innovations have allowed machines to perform tasks that were once thought to be exclusive to humans, such as natural language understanding, image recognition, and decision-making. The future of AI holds immense promise, but it also presents important ethical and societal challenges that we need to address.
Moderator: Indeed, the ethical aspect of AI is a crucial issue. Let's hear from our next speaker, Dr. James Chen, a pioneer in AI ethics.
Dr. Chen: Thank you for having me. As AI technologies continue to advance, it's essential that we consider the ethical implications. AI can perpetuate biases, invade privacy, and disrupt the job market. We must work collectively to ensure that AI is developed and deployed in a way that respects human rights, diversity, and transparency. Regulatory frameworks and ethical guidelines are crucial to navigate this evolving landscape and strike a balance between innovation and safeguarding societal values.
Moderator: Excellent points, Dr. Chen. Now, I'd like to turn to Dr. Sarah Patel, who has expertise in AI and its applications in healthcare.
Dr. Patel: Thank you. AI in healthcare is revolutionizing how we diagnose, treat, and manage diseases. Machine learning models can analyze vast datasets to predict disease outcomes and personalize treatment plans. It can improve the accuracy of medical imaging and reduce diagnostic errors. However, we must be cautious about data privacy and the need for responsible AI implementation in the healthcare sector. Ensuring data security and patient trust is essential for the successful integration of AI into healthcare systems.
Moderator: Thank you, Dr. Patel. Lastly, we have Dr. Michael Johnson, an expert in AI and its economic implications.
Dr. Johnson: AI is reshaping industries and economies worldwide. While it has the potential to boost productivity and drive economic growth, it also poses challenges in terms of job displacement and workforce adaptation. The role of governments, businesses, and educational institutions in upskilling and retraining the workforce is paramount. Additionally, fostering innovation and entrepreneurship in AI-related fields can create new opportunities and ensure a balanced and prosperous AI-driven economy.
Moderator: Thank you to all our speakers for their valuable insights on the multifaceted world of AI. It's clear that AI's impact on our society is immense, with profound implications across ethics, healthcare, and the economy. As we continue to advance, it is crucial that we remain vigilant and considerate of the ethical and societal dimensions, ensuring that AI remains a force for good. Thank you all for participating in this enlightening webinar
```
This is summarized as:
```
Artificial Intelligence (AI) is a rapidly evolving technology that has profound implications for society, industry, and the economy. It has the potential to revolutionize many aspects of our lives, but it also presents important ethical and societal challenges that we need to address. In this webinar, we will hear from Dr. Emily Rodriguez, a renowned AI researcher and professor, Dr. James Chen, a pioneer in AI ethics, and Dr. Sarah Patel, an expert in AI and its applications in healthcare, who will discuss the ethical, societal, and economic implications of AI. Dr. Michael Johnson, a leading expert in the field of AI-related industries, will also discuss the economic implications.
```
## Intended uses & limitations
Intended for summarizing video conferences/webinars.
Try out the model with the code below :D
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
config = PeftConfig.from_pretrained("jolenechong/lora-bart-samsum-tib-1024")
model = AutoModelForSeq2SeqLM.from_pretrained("philschmid/bart-large-cnn-samsum")
model = PeftModel.from_pretrained(model, "jolenechong/lora-bart-samsum-tib-1024")
tokenizer = AutoTokenizer.from_pretrained("jolenechong/lora-bart-samsum-tib-1024", from_pt=True)
text = """[add transcript you want to summarize here]"""
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(outputs.detach().cpu().numpy())[0])
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.5.0
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1 |
FeMatsu/BERT_Regression_Base | FeMatsu | "2023-12-13T22:38:13Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T15:44:08Z" | ---
license: mit
---
|
suncy13/graysacleAugmentedFoot | suncy13 | "2023-12-17T17:14:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T15:48:28Z" | Entry not found |
NafishZaldinanda/whisper-small-id | NafishZaldinanda | "2023-12-14T07:37:02Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T15:51:49Z" | Entry not found |
sanghyo/FinalAssginment_model1 | sanghyo | "2023-12-14T14:35:52Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T15:53:17Z" | Entry not found |
hamxea/Mistral-7B-v0.1-activity-fine-tuned-adapters-v4 | hamxea | "2023-12-13T15:59:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2023-12-13T15:58:15Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
hamxea/Mistral-7B-v0.1-activity-fine-tuned-v4 | hamxea | "2024-03-31T14:47:02Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"medical",
"text-generation-inference",
"en",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | "2023-12-13T15:59:59Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
license: other
language:
- en
tags:
- medical
- text-generation-inference
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
sh-zheng/pegasus-samsum | sh-zheng | "2023-12-14T00:42:12Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T16:00:14Z" | ---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
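
With `train_batch_size: 1` and `gradient_accumulation_steps: 16`, gradients from 16 micro-batches are summed before a single optimizer step, which is what yields the effective `total_train_batch_size: 16`. A minimal sketch of the pattern on a hypothetical scalar model (one common convention, shown here, divides each micro-batch gradient by the accumulation count so the update matches the full-batch mean gradient):

```python
def grad(w, x, y):
    """Gradient of the squared error 0.5 * (w * x - y) ** 2 with respect to w."""
    return (w * x - y) * x

def accumulated_step(w, examples, lr=5e-5, accum_steps=16):
    """Accumulate gradients over accum_steps micro-batches, then update w once."""
    total_grad = 0.0
    for i, (x, y) in enumerate(examples, start=1):
        total_grad += grad(w, x, y) / accum_steps  # scale like loss / accum_steps
        if i % accum_steps == 0:
            w -= lr * total_grad  # one optimizer step per accum_steps micro-batches
            total_grad = 0.0
    return w
```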
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6566 | 0.54 | 500 | 1.4861 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
booth-ai/rv20inpaintdegrow | booth-ai | "2024-01-04T14:41:37Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | image-to-image | "2023-12-13T16:01:14Z" | Entry not found |
Hasnain12/CostAccounting | Hasnain12 | "2023-12-13T16:01:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:01:25Z" | Entry not found |
sofiapecora/wav2vec2-large-robust-12-ft-emotion-msp-dim-finetuned-gtzan | sofiapecora | "2023-12-13T19:04:01Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-12-13T16:02:26Z" | Entry not found |
marco27/Joe-station | marco27 | "2023-12-13T16:04:42Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T16:04:42Z" | ---
license: apache-2.0
---
|
lilmoinx/vit-demo | lilmoinx | "2023-12-13T16:06:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T16:06:47Z" | Entry not found |
am-infoweb/rap_phase2_13dec_10i_v2 | am-infoweb | "2023-12-13T17:10:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-12-13T16:09:16Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_13dec_10i_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_13dec_10i_v2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2572 | 1.0 | 4135 | 0.2366 |
| 0.11 | 2.0 | 8270 | 0.0804 |
| 0.1089 | 3.0 | 12405 | 0.0614 |
| 0.0276 | 4.0 | 16540 | 0.0430 |
| 0.0466 | 5.0 | 20675 | 0.0331 |
| 0.0221 | 6.0 | 24810 | 0.0230 |
| 0.0129 | 7.0 | 28945 | 0.0173 |
| 0.0094 | 8.0 | 33080 | 0.0248 |
| 0.0003 | 9.0 | 37215 | 0.0215 |
| 0.0 | 10.0 | 41350 | 0.0239 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jane91/LLama2 | jane91 | "2023-12-13T16:09:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:09:28Z" | Entry not found |
mojuss/llama2 | mojuss | "2023-12-13T16:16:57Z" | 0 | 0 | null | [
"license:llama2",
"region:us"
] | null | "2023-12-13T16:16:57Z" | ---
license: llama2
---
|
jiajunzhu/comp561final | jiajunzhu | "2023-12-13T16:26:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T16:20:15Z" | Entry not found |
seba3y/whisper-tiny-accuracy | seba3y | "2023-12-13T16:22:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-12-13T16:22:42Z" | Entry not found |
vishwa27/flan-t5-large-mawpnli-calcx-nli-text-pt | vishwa27 | "2023-12-13T16:24:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-13T16:23:08Z" | Entry not found |
techandy42/decision-transformer-HalfCheetah-v3 | techandy42 | "2023-12-13T16:23:19Z" | 0 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] | null | "2023-12-13T16:23:17Z" | ---
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 120
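Decision Transformers condition each action on the *return-to-go*, the sum of future rewards from that timestep onward. This card does not show the preprocessing, but a minimal sketch of that computation looks like:

```python
def returns_to_go(rewards):
    """Cumulative future reward at each timestep, as used to condition
    a Decision Transformer on a target return."""
    rtg = []
    running = 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

print(returns_to_go([1.0, 2.0, 3.0]))  # [6.0, 5.0, 3.0]
```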
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Thekingbalxd/Mymodels | Thekingbalxd | "2023-12-13T16:25:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:25:51Z" | Entry not found |
Ruchita-debug/distilbert-base-uncased-lora-text-classification | Ruchita-debug | "2023-12-13T16:28:31Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2023-12-13T16:28:02Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3644
- Accuracy: 0.858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
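LoRA trains only small low-rank factors instead of the full weight matrices. As a rough illustration of the parameter savings (the rank used for this card is not stated; `r = 8` and the 768-dimensional hidden size of `distilbert-base-uncased` are assumptions):

```python
# Parameter arithmetic for a LoRA adapter on one square weight matrix.
d_in, d_out, r = 768, 768, 8  # hypothetical rank r

full_params = d_in * d_out        # params updated by full fine-tuning
lora_params = r * (d_in + d_out)  # params in the A (r x d_in) and B (d_out x r) factors

print(full_params)                # 589824
print(lora_params)                # 12288
print(lora_params / full_params)  # ~0.021, i.e. ~2% of the matrix
```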
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.3793          | 0.856    |
| 0.435         | 2.0   | 500  | 0.5190          | 0.858    |
| 0.435         | 3.0   | 750  | 0.8326          | 0.857    |
| 0.2005        | 4.0   | 1000 | 0.9137          | 0.856    |
| 0.2005        | 5.0   | 1250 | 1.0362          | 0.862    |
| 0.0827        | 6.0   | 1500 | 1.2331          | 0.852    |
| 0.0827        | 7.0   | 1750 | 1.2110          | 0.856    |
| 0.033         | 8.0   | 2000 | 1.2963          | 0.864    |
| 0.033         | 9.0   | 2250 | 1.3438          | 0.863    |
| 0.0128        | 10.0  | 2500 | 1.3644          | 0.858    |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.1.1+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0 |
cwiz/art-kris | cwiz | "2023-12-13T16:28:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:28:22Z" | Entry not found |
cwiz/art_kris | cwiz | "2023-12-13T16:29:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:29:22Z" | Entry not found |
EthanRhys/Princess-Daisy | EthanRhys | "2023-12-13T16:31:12Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-13T16:29:58Z" | ---
license: openrail
---
|
BauyrjanQ/wav2vec2-large-mms-1b-kazakh-ksc2-2b-5ep_2nd | BauyrjanQ | "2023-12-15T13:36:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-13T16:30:05Z" | Entry not found |
Augusto777/vit-base-patch16-224-dmae-va-U | Augusto777 | "2023-12-13T16:57:21Z" | 0 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T16:30:18Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-dmae-va-U
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-dmae-va-U
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0534
- Accuracy: 0.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.9 | 7 | 1.4319 | 0.2569 |
| 1.3911 | 1.94 | 15 | 1.2133 | 0.4771 |
| 1.3911 | 2.97 | 23 | 0.9487 | 0.6055 |
| 1.0766 | 4.0 | 31 | 0.6542 | 0.7156 |
| 0.6974 | 4.9 | 38 | 0.4644 | 0.8716 |
| 0.6974 | 5.94 | 46 | 0.3919 | 0.8716 |
| 0.421 | 6.97 | 54 | 0.3094 | 0.8716 |
| 0.2513 | 8.0 | 62 | 0.2334 | 0.8991 |
| 0.2513 | 8.9 | 69 | 0.1915 | 0.9174 |
| 0.1931 | 9.94 | 77 | 0.2431 | 0.8807 |
| 0.1757 | 10.97 | 85 | 0.1608 | 0.9450 |
| 0.1757 | 12.0 | 93 | 0.1424 | 0.9266 |
| 0.1442 | 12.9 | 100 | 0.1280 | 0.9450 |
| 0.1085 | 13.94 | 108 | 0.1055 | 0.9541 |
| 0.1085 | 14.97 | 116 | 0.1080 | 0.9541 |
| 0.1056 | 16.0 | 124 | 0.0997 | 0.9633 |
| 0.1056 | 16.9 | 131 | 0.1185 | 0.9633 |
| 0.0926 | 17.94 | 139 | 0.0773 | 0.9633 |
| 0.103 | 18.97 | 147 | 0.1279 | 0.9633 |
| 0.103 | 20.0 | 155 | 0.1043 | 0.9633 |
| 0.0938 | 20.9 | 162 | 0.0824 | 0.9817 |
| 0.0891 | 21.94 | 170 | 0.1449 | 0.9541 |
| 0.0891 | 22.97 | 178 | 0.1366 | 0.9633 |
| 0.0754 | 24.0 | 186 | 0.1148 | 0.9358 |
| 0.0882 | 24.9 | 193 | 0.1992 | 0.9358 |
| 0.0882 | 25.94 | 201 | 0.0743 | 0.9817 |
| 0.078 | 26.97 | 209 | 0.0668 | 0.9725 |
| 0.0666 | 28.0 | 217 | 0.0534 | 0.9908 |
| 0.0666 | 28.9 | 224 | 0.0499 | 0.9908 |
| 0.0514 | 29.94 | 232 | 0.0433 | 0.9725 |
| 0.062 | 30.97 | 240 | 0.0840 | 0.9633 |
| 0.062 | 32.0 | 248 | 0.0513 | 0.9725 |
| 0.0712 | 32.9 | 255 | 0.0482 | 0.9817 |
| 0.0712 | 33.94 | 263 | 0.0553 | 0.9817 |
| 0.0703 | 34.97 | 271 | 0.0602 | 0.9725 |
| 0.0553 | 36.0 | 279 | 0.0595 | 0.9725 |
| 0.0553 | 36.13 | 280 | 0.0595 | 0.9725 |
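The gradient-accumulation and warmup settings listed above fit together with the step counts in the table; a short arithmetic check (total steps taken from the table's final row):

```python
import math

train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 128, as listed

total_steps = 280                            # final step in the results table
warmup_steps = math.ceil(0.1 * total_steps)  # lr_scheduler_warmup_ratio = 0.1

print(total_train_batch_size)  # 128
print(warmup_steps)            # 28
```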
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Arekku21/vivit-b-16x2-kinetics400-finetuned-MSL_40_classes_4 | Arekku21 | "2023-12-13T20:26:30Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vivit",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-12-13T16:36:14Z" | Entry not found |
Ubaid000/game | Ubaid000 | "2023-12-13T16:38:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-12-13T16:38:06Z" | Entry not found |
hkivancoral/smids_3x_beit_base_sgd_0001_fold1 | hkivancoral | "2023-12-13T17:27:20Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T16:40:05Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_sgd_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7512520868113522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_sgd_0001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6006
- Accuracy: 0.7513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2428 | 1.0 | 226 | 1.2900 | 0.3139 |
| 1.1165 | 2.0 | 452 | 1.2280 | 0.3389 |
| 1.085 | 3.0 | 678 | 1.1717 | 0.3606 |
| 1.0873 | 4.0 | 904 | 1.1174 | 0.3973 |
| 1.0209 | 5.0 | 1130 | 1.0648 | 0.4207 |
| 0.9387 | 6.0 | 1356 | 1.0163 | 0.4741 |
| 0.9347 | 7.0 | 1582 | 0.9719 | 0.5175 |
| 0.8727 | 8.0 | 1808 | 0.9312 | 0.5626 |
| 0.8169 | 9.0 | 2034 | 0.8951 | 0.5993 |
| 0.861 | 10.0 | 2260 | 0.8623 | 0.6160 |
| 0.8138 | 11.0 | 2486 | 0.8344 | 0.6327 |
| 0.7635 | 12.0 | 2712 | 0.8096 | 0.6444 |
| 0.7469 | 13.0 | 2938 | 0.7879 | 0.6477 |
| 0.7457 | 14.0 | 3164 | 0.7697 | 0.6561 |
| 0.6958 | 15.0 | 3390 | 0.7527 | 0.6728 |
| 0.6961 | 16.0 | 3616 | 0.7374 | 0.6795 |
| 0.6436 | 17.0 | 3842 | 0.7245 | 0.6878 |
| 0.6513 | 18.0 | 4068 | 0.7127 | 0.6912 |
| 0.6672 | 19.0 | 4294 | 0.7016 | 0.6962 |
| 0.6558 | 20.0 | 4520 | 0.6918 | 0.7012 |
| 0.6466 | 21.0 | 4746 | 0.6834 | 0.7028 |
| 0.6561 | 22.0 | 4972 | 0.6751 | 0.7045 |
| 0.6208 | 23.0 | 5198 | 0.6670 | 0.7145 |
| 0.6499 | 24.0 | 5424 | 0.6602 | 0.7162 |
| 0.6316 | 25.0 | 5650 | 0.6537 | 0.7179 |
| 0.6488 | 26.0 | 5876 | 0.6486 | 0.7245 |
| 0.6013 | 27.0 | 6102 | 0.6431 | 0.7229 |
| 0.6349 | 28.0 | 6328 | 0.6385 | 0.7295 |
| 0.5571 | 29.0 | 6554 | 0.6343 | 0.7312 |
| 0.6883 | 30.0 | 6780 | 0.6303 | 0.7329 |
| 0.5874 | 31.0 | 7006 | 0.6269 | 0.7362 |
| 0.5957 | 32.0 | 7232 | 0.6236 | 0.7412 |
| 0.5454 | 33.0 | 7458 | 0.6209 | 0.7446 |
| 0.5392 | 34.0 | 7684 | 0.6182 | 0.7446 |
| 0.6014 | 35.0 | 7910 | 0.6160 | 0.7462 |
| 0.5394 | 36.0 | 8136 | 0.6140 | 0.7462 |
| 0.5557 | 37.0 | 8362 | 0.6119 | 0.7479 |
| 0.5868 | 38.0 | 8588 | 0.6101 | 0.7479 |
| 0.5673 | 39.0 | 8814 | 0.6084 | 0.7479 |
| 0.5576 | 40.0 | 9040 | 0.6071 | 0.7479 |
| 0.5598 | 41.0 | 9266 | 0.6057 | 0.7479 |
| 0.5493 | 42.0 | 9492 | 0.6045 | 0.7496 |
| 0.573 | 43.0 | 9718 | 0.6035 | 0.7513 |
| 0.5428 | 44.0 | 9944 | 0.6027 | 0.7513 |
| 0.6174 | 45.0 | 10170 | 0.6020 | 0.7513 |
| 0.5654 | 46.0 | 10396 | 0.6015 | 0.7513 |
| 0.5911 | 47.0 | 10622 | 0.6010 | 0.7513 |
| 0.5644 | 48.0 | 10848 | 0.6008 | 0.7513 |
| 0.5284 | 49.0 | 11074 | 0.6007 | 0.7513 |
| 0.5888 | 50.0 | 11300 | 0.6006 | 0.7513 |
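The `linear` scheduler with `warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps and then decays it linearly to zero. A minimal sketch of that schedule, using this run's base rate and the 11,300 total steps from the table above:

```python
def linear_lr_with_warmup(step, base_lr=1e-4, total_steps=11300, warmup_ratio=0.1):
    """Linear warmup then linear decay to zero, mirroring the
    `linear` scheduler with warmup used for this run."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr_with_warmup(0))      # 0.0
print(linear_lr_with_warmup(1130))   # 1e-4 (peak, at the end of warmup)
print(linear_lr_with_warmup(11300))  # 0.0
```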
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/smids_3x_beit_base_rms_001_fold1 | hkivancoral | "2023-12-13T17:29:40Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T16:41:12Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_beit_base_rms_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7579298831385642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_beit_base_rms_001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7807
- Accuracy: 0.7579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1042 | 1.0 | 226 | 1.0981 | 0.3456 |
| 0.942 | 2.0 | 452 | 0.9022 | 0.5275 |
| 0.8328 | 3.0 | 678 | 0.9600 | 0.4691 |
| 0.8702 | 4.0 | 904 | 0.9083 | 0.5543 |
| 0.8313 | 5.0 | 1130 | 0.8171 | 0.5760 |
| 0.8558 | 6.0 | 1356 | 0.8467 | 0.5342 |
| 0.7514 | 7.0 | 1582 | 0.7612 | 0.6277 |
| 0.7839 | 8.0 | 1808 | 0.7968 | 0.5659 |
| 0.7602 | 9.0 | 2034 | 0.7655 | 0.6210 |
| 0.7694 | 10.0 | 2260 | 0.7429 | 0.6060 |
| 0.7166 | 11.0 | 2486 | 0.7968 | 0.5626 |
| 0.7048 | 12.0 | 2712 | 0.8272 | 0.6077 |
| 0.6745 | 13.0 | 2938 | 0.8054 | 0.5993 |
| 0.7185 | 14.0 | 3164 | 0.7867 | 0.6194 |
| 0.7264 | 15.0 | 3390 | 0.7701 | 0.6377 |
| 0.6767 | 16.0 | 3616 | 0.7383 | 0.6144 |
| 0.6006 | 17.0 | 3842 | 0.8677 | 0.6077 |
| 0.6721 | 18.0 | 4068 | 0.7460 | 0.6361 |
| 0.6352 | 19.0 | 4294 | 0.7492 | 0.6127 |
| 0.642 | 20.0 | 4520 | 0.7712 | 0.6160 |
| 0.6647 | 21.0 | 4746 | 0.7257 | 0.6544 |
| 0.6408 | 22.0 | 4972 | 0.7629 | 0.6611 |
| 0.7655 | 23.0 | 5198 | 0.7723 | 0.6127 |
| 0.7074 | 24.0 | 5424 | 0.6879 | 0.6928 |
| 0.6919 | 25.0 | 5650 | 0.6962 | 0.6828 |
| 0.698 | 26.0 | 5876 | 0.7479 | 0.6361 |
| 0.641 | 27.0 | 6102 | 0.7653 | 0.6644 |
| 0.6417 | 28.0 | 6328 | 0.7791 | 0.6594 |
| 0.6123 | 29.0 | 6554 | 0.7195 | 0.6761 |
| 0.5918 | 30.0 | 6780 | 0.6991 | 0.6995 |
| 0.5562 | 31.0 | 7006 | 0.6938 | 0.6978 |
| 0.6293 | 32.0 | 7232 | 0.6564 | 0.7145 |
| 0.5615 | 33.0 | 7458 | 0.7421 | 0.6878 |
| 0.5411 | 34.0 | 7684 | 0.6688 | 0.7145 |
| 0.4483 | 35.0 | 7910 | 0.7701 | 0.6962 |
| 0.4776 | 36.0 | 8136 | 0.6349 | 0.7412 |
| 0.4775 | 37.0 | 8362 | 0.6430 | 0.7262 |
| 0.4854 | 38.0 | 8588 | 0.7095 | 0.7078 |
| 0.4208 | 39.0 | 8814 | 0.6254 | 0.7412 |
| 0.3846 | 40.0 | 9040 | 0.6645 | 0.7396 |
| 0.3663 | 41.0 | 9266 | 0.6430 | 0.7563 |
| 0.3616 | 42.0 | 9492 | 0.6767 | 0.7462 |
| 0.3863 | 43.0 | 9718 | 0.6432 | 0.7596 |
| 0.2608 | 44.0 | 9944 | 0.6472 | 0.7563 |
| 0.4159 | 45.0 | 10170 | 0.6400 | 0.7663 |
| 0.333 | 46.0 | 10396 | 0.6911 | 0.7613 |
| 0.2718 | 47.0 | 10622 | 0.7119 | 0.7696 |
| 0.2639 | 48.0 | 10848 | 0.7421 | 0.7596 |
| 0.236 | 49.0 | 11074 | 0.7543 | 0.7613 |
| 0.2672 | 50.0 | 11300 | 0.7807 | 0.7579 |
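Note that the final-epoch accuracy reported for this card (0.7579) is not the best seen during training; epoch 47 reached 0.7696 in the table above. A sketch of best-checkpoint selection over a validation history (epochs taken from the last rows of the table):

```python
# Hypothetical best-checkpoint selection over an (epoch, accuracy) history.
history = [(46, 0.7613), (47, 0.7696), (48, 0.7596), (49, 0.7613), (50, 0.7579)]

best_epoch, best_acc = max(history, key=lambda pair: pair[1])
print(best_epoch, best_acc)  # 47 0.7696
```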
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|