modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (unknown) | card (string, 1–901k chars) |
---|---|---|---|---|---|---|---|---|---|
Jubliano/wav2vec2-large-xls-r-300m-ipa-INTERNATIONAL1.9.2WithoutSpaces | Jubliano | "2024-06-11T18:40:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-10T11:31:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
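Since the card is still an unfilled template, here is a minimal, hypothetical sketch based only on the repo's tags (`wav2vec2`, `automatic-speech-recognition`, `transformers`); the audio file name is a placeholder:

```python
from transformers import pipeline

# Hypothetical usage sketch; assumes the checkpoint works with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Jubliano/wav2vec2-large-xls-r-300m-ipa-INTERNATIONAL1.9.2WithoutSpaces",
)
# wav2vec2 models expect 16 kHz mono audio; "sample.wav" is a placeholder path.
print(asr("sample.wav")["text"])
```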
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
talkchief/distil-whisper_distil-large-v3 | talkchief | "2024-06-10T11:32:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:32:27Z" | Entry not found |
Naturen/Naturen | Naturen | "2024-06-10T11:33:30Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2024-06-10T11:32:31Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kevin009/llamamathv7 | kevin009 | "2024-06-10T12:37:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-10T11:32:37Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** kevin009
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
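The card ships no inference snippet; below is a minimal sketch, assuming the repo contains merged weights that load directly with 🤗 Transformers and that the tokenizer carries the Llama-3-Instruct chat template (both are assumptions, since the card does not say):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevin009/llamamathv7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Chat-style prompt; the exact expected format is an assumption.
messages = [{"role": "user", "content": "What is 17 * 23? Show your steps."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```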
|
talkchief/openai_whisper-large-v2 | talkchief | "2024-06-10T11:32:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:32:54Z" | Entry not found |
sounana/large | sounana | "2024-06-10T14:44:09Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-10T11:35:51Z" | Entry not found |
AdamRTomkins/test_upload | AdamRTomkins | "2024-06-10T14:44:00Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-06-10T11:37:51Z" | ---
license: mit
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: test_upload
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adam_beta2: 0.95
adam_epsilon: 1.0e-05
adapter: qlora
base_model: microsoft/phi-1_5
dataset_prepared_path: null
datasets:
- path: garage-bAInd/Open-Platypus
type: alpaca
debug: null
deepspeed: null
early_stopping_patience: null
evals_per_epoch: 1
flash_attention: true
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
hub_model_id: AdamRTomkins/test_upload
hub_strategy: end
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 2
micro_batch_size: 1
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: ./outputs/phi-sft-out
pad_to_sequence_len: true
resize_token_embeddings_to_32x: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tokenizer_type: AutoTokenizer
val_set_size: 0.05
wandb_entity: null
wandb_log_model: null
wandb_name: null
wandb_project: null
wandb_watch: null
warmup_steps: 100
weight_decay: 0.1
xformers_attention: null
```
</details><br>
# test_upload
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the garage-bAInd/Open-Platypus dataset (per the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.3469
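The card omits a usage snippet; a minimal sketch follows, assuming the repo stores the QLoRA/PEFT adapter described in the config above. The Alpaca-style prompt mirrors the config's `type: alpaca` dataset setting but is not documented by the author:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model (microsoft/phi-1_5) plus the adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "AdamRTomkins/test_upload",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Hypothetical Alpaca-style prompt; the trained format is an assumption.
prompt = "### Instruction:\nExplain overfitting in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=96)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```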
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6676 | 0.0002 | 2 | 1.3469 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
vintage-lavender619/vit-base-patch16-224-finalterm | vintage-lavender619 | "2024-06-10T12:03:25Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-10T11:38:36Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finalterm
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.88125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finalterm
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3547
- Accuracy: 0.8812
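For readers who want to try the checkpoint, here is a minimal sketch using the standard 🤗 Transformers image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="vintage-lavender619/vit-base-patch16-224-finalterm",
)
# Returns the top predicted labels with confidence scores.
print(clf("example.jpg"))
```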
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3999 | 1.0 | 10 | 1.1607 | 0.5094 |
| 0.993 | 2.0 | 20 | 0.7807 | 0.7031 |
| 0.6819 | 3.0 | 30 | 0.5753 | 0.8063 |
| 0.5485 | 4.0 | 40 | 0.6475 | 0.7594 |
| 0.463 | 5.0 | 50 | 0.4393 | 0.8406 |
| 0.3929 | 6.0 | 60 | 0.4067 | 0.8625 |
| 0.3636 | 7.0 | 70 | 0.3626 | 0.8875 |
| 0.3719 | 8.0 | 80 | 0.3613 | 0.8875 |
| 0.343 | 9.0 | 90 | 0.3624 | 0.8781 |
| 0.3297 | 10.0 | 100 | 0.3800 | 0.8625 |
| 0.2948 | 11.0 | 110 | 0.3320 | 0.8938 |
| 0.33 | 12.0 | 120 | 0.3481 | 0.8781 |
| 0.3281 | 13.0 | 130 | 0.3418 | 0.8875 |
| 0.3 | 14.0 | 140 | 0.3425 | 0.8844 |
| 0.3014 | 15.0 | 150 | 0.3547 | 0.8812 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
thangduong0509/blip_vivqa_finetuned_200 | thangduong0509 | "2024-06-10T11:41:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:41:13Z" | Entry not found |
tsavage68/UTI2_M2_1000steps_1e7rate_CSFTDPO | tsavage68 | "2024-06-10T11:46:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/UTI_M2_1000steps_1e7rate_SFT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T11:42:51Z" | ---
license: apache-2.0
base_model: tsavage68/UTI_M2_1000steps_1e7rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: UTI2_M2_1000steps_1e7rate_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI2_M2_1000steps_1e7rate_CSFTDPO
This model is a fine-tuned version of [tsavage68/UTI_M2_1000steps_1e7rate_SFT](https://huggingface.co/tsavage68/UTI_M2_1000steps_1e7rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5546
- Rewards/chosen: 0.0422
- Rewards/rejected: -0.2698
- Rewards/accuracies: 0.8600
- Rewards/margins: 0.3120
- Logps/rejected: -39.8957
- Logps/chosen: -19.8371
- Logits/rejected: -2.6809
- Logits/chosen: -2.6783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6931 | 0.3333 | 25 | 0.6921 | 0.0031 | 0.0001 | 0.1900 | 0.0030 | -39.3560 | -19.9153 | -2.6832 | -2.6806 |
| 0.6879 | 0.6667 | 50 | 0.6797 | 0.0157 | -0.0141 | 0.5300 | 0.0298 | -39.3843 | -19.8902 | -2.6825 | -2.6799 |
| 0.7047 | 1.0 | 75 | 0.6833 | 0.0064 | -0.0173 | 0.5300 | 0.0237 | -39.3907 | -19.9087 | -2.6821 | -2.6796 |
| 0.6925 | 1.3333 | 100 | 0.6719 | 0.0150 | -0.0314 | 0.6200 | 0.0464 | -39.4189 | -19.8915 | -2.6833 | -2.6807 |
| 0.6674 | 1.6667 | 125 | 0.6638 | 0.0030 | -0.0600 | 0.6900 | 0.0630 | -39.4762 | -19.9155 | -2.6817 | -2.6791 |
| 0.6591 | 2.0 | 150 | 0.6356 | 0.0148 | -0.1082 | 0.8100 | 0.1230 | -39.5726 | -19.8920 | -2.6816 | -2.6790 |
| 0.637 | 2.3333 | 175 | 0.6319 | 0.0113 | -0.1196 | 0.8200 | 0.1309 | -39.5954 | -19.8989 | -2.6812 | -2.6786 |
| 0.6179 | 2.6667 | 200 | 0.6054 | 0.0342 | -0.1567 | 0.8300 | 0.1909 | -39.6696 | -19.8532 | -2.6821 | -2.6795 |
| 0.6173 | 3.0 | 225 | 0.6032 | 0.0393 | -0.1577 | 0.8200 | 0.1970 | -39.6716 | -19.8429 | -2.6816 | -2.6790 |
| 0.5873 | 3.3333 | 250 | 0.5858 | 0.0189 | -0.2169 | 0.8400 | 0.2358 | -39.7899 | -19.8837 | -2.6812 | -2.6786 |
| 0.5795 | 3.6667 | 275 | 0.5877 | 0.0141 | -0.2185 | 0.8000 | 0.2326 | -39.7932 | -19.8934 | -2.6813 | -2.6787 |
| 0.6008 | 4.0 | 300 | 0.5756 | 0.0356 | -0.2244 | 0.8400 | 0.2600 | -39.8049 | -19.8503 | -2.6803 | -2.6777 |
| 0.57 | 4.3333 | 325 | 0.5764 | 0.0262 | -0.2323 | 0.8400 | 0.2585 | -39.8208 | -19.8692 | -2.6807 | -2.6781 |
| 0.5584 | 4.6667 | 350 | 0.5605 | 0.0242 | -0.2723 | 0.8600 | 0.2964 | -39.9007 | -19.8732 | -2.6802 | -2.6776 |
| 0.572 | 5.0 | 375 | 0.5604 | 0.0279 | -0.2703 | 0.8700 | 0.2982 | -39.8968 | -19.8658 | -2.6804 | -2.6778 |
| 0.5811 | 5.3333 | 400 | 0.5617 | 0.0342 | -0.2607 | 0.8500 | 0.2949 | -39.8776 | -19.8531 | -2.6798 | -2.6772 |
| 0.5751 | 5.6667 | 425 | 0.5648 | 0.0392 | -0.2472 | 0.8600 | 0.2865 | -39.8506 | -19.8431 | -2.6809 | -2.6783 |
| 0.561 | 6.0 | 450 | 0.5624 | 0.0124 | -0.2803 | 0.8500 | 0.2927 | -39.9167 | -19.8967 | -2.6806 | -2.6781 |
| 0.545 | 6.3333 | 475 | 0.5525 | 0.0448 | -0.2732 | 0.8700 | 0.3180 | -39.9025 | -19.8319 | -2.6815 | -2.6789 |
| 0.6125 | 6.6667 | 500 | 0.5589 | 0.0463 | -0.2561 | 0.8700 | 0.3023 | -39.8683 | -19.8290 | -2.6811 | -2.6785 |
| 0.5398 | 7.0 | 525 | 0.5612 | 0.0214 | -0.2753 | 0.8400 | 0.2966 | -39.9067 | -19.8788 | -2.6805 | -2.6779 |
| 0.543 | 7.3333 | 550 | 0.5643 | 0.0400 | -0.2494 | 0.8500 | 0.2894 | -39.8549 | -19.8415 | -2.6806 | -2.6781 |
| 0.5541 | 7.6667 | 575 | 0.5616 | 0.0247 | -0.2721 | 0.8500 | 0.2968 | -39.9002 | -19.8720 | -2.6813 | -2.6788 |
| 0.5576 | 8.0 | 600 | 0.5650 | 0.0122 | -0.2764 | 0.8500 | 0.2886 | -39.9089 | -19.8971 | -2.6812 | -2.6786 |
| 0.5543 | 8.3333 | 625 | 0.5605 | 0.0330 | -0.2649 | 0.8600 | 0.2980 | -39.8860 | -19.8555 | -2.6809 | -2.6783 |
| 0.5405 | 8.6667 | 650 | 0.5648 | 0.0146 | -0.2732 | 0.8500 | 0.2878 | -39.9025 | -19.8924 | -2.6810 | -2.6784 |
| 0.5535 | 9.0 | 675 | 0.5536 | 0.0354 | -0.2789 | 0.8500 | 0.3143 | -39.9140 | -19.8507 | -2.6798 | -2.6772 |
| 0.5292 | 9.3333 | 700 | 0.5534 | 0.0444 | -0.2708 | 0.8600 | 0.3152 | -39.8978 | -19.8328 | -2.6808 | -2.6782 |
| 0.5718 | 9.6667 | 725 | 0.5556 | 0.0429 | -0.2668 | 0.8400 | 0.3097 | -39.8898 | -19.8358 | -2.6813 | -2.6787 |
| 0.585 | 10.0 | 750 | 0.5512 | 0.0392 | -0.2799 | 0.8800 | 0.3191 | -39.9159 | -19.8431 | -2.6809 | -2.6783 |
| 0.5609 | 10.3333 | 775 | 0.5540 | 0.0352 | -0.2800 | 0.8600 | 0.3152 | -39.9161 | -19.8511 | -2.6808 | -2.6782 |
| 0.5572 | 10.6667 | 800 | 0.5500 | 0.0424 | -0.2816 | 0.8700 | 0.3240 | -39.9193 | -19.8367 | -2.6809 | -2.6783 |
| 0.5514 | 11.0 | 825 | 0.5541 | 0.0433 | -0.2698 | 0.8700 | 0.3131 | -39.8958 | -19.8350 | -2.6809 | -2.6783 |
| 0.5467 | 11.3333 | 850 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
| 0.5803 | 11.6667 | 875 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
| 0.5514 | 12.0 | 900 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
| 0.5579 | 12.3333 | 925 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
| 0.5599 | 12.6667 | 950 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
| 0.5609 | 13.0 | 975 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
| 0.552 | 13.3333 | 1000 | 0.5546 | 0.0422 | -0.2698 | 0.8600 | 0.3120 | -39.8957 | -19.8371 | -2.6809 | -2.6783 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
yaraksen/komod_no_sftmx_2_6 | yaraksen | "2024-06-10T11:52:48Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-10T11:46:15Z" | Entry not found |
alexgrigore/videomae-base-finetuned-good-gestureUnitV11 | alexgrigore | "2024-06-10T11:50:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-10T11:46:54Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-good-gestureUnitV11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-good-gestureUnitV11
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9310
- Loss: 0.3025
- Accuracy Gunit: 0.8571
- Accuracy Nothing: 1.0
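A minimal sketch for trying the checkpoint, assuming it works with the standard 🤗 Transformers video-classification pipeline (a video-decoding backend such as `av` must be installed; the clip path is a placeholder):

```python
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="alexgrigore/videomae-base-finetuned-good-gestureUnitV11",
)
# Returns label/score pairs for the sampled frames of the clip.
print(clf("gesture_clip.mp4"))
```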
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 80
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Accuracy Gunit | Accuracy Nothing |
|:-------------:|:------:|:----:|:--------:|:---------------:|:--------------:|:----------------:|
| 0.6685 | 0.2125 | 17 | 0.6071 | 0.6297 | 1.0 | 0.2903 |
| 0.5348 | 1.2125 | 34 | 0.8214 | 0.4292 | 0.6 | 1.0 |
| 0.3395 | 2.2125 | 51 | 0.7857 | 0.4852 | 0.88 | 0.7097 |
| 0.3607 | 3.2125 | 68 | 0.8214 | 0.4507 | 0.6 | 1.0 |
| 0.2436 | 4.15 | 80 | 0.7857 | 0.4433 | 0.76 | 0.8065 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
manbeast3b/KinoInferlol2 | manbeast3b | "2024-06-10T11:47:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:47:06Z" | Entry not found |
GarciaDos/ppo-Huggy2 | GarciaDos | "2024-06-10T11:49:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:49:40Z" | Entry not found |
Kigo1974/KigoGrader-1.0 | Kigo1974 | "2024-06-10T11:50:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:50:24Z" | Entry not found |
danigambit/test_1006 | danigambit | "2024-06-10T11:51:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:51:00Z" | Entry not found |
jnalwa/auto | jnalwa | "2024-06-10T11:52:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:52:29Z" | Entry not found |
Attaboi/my_awesome_model | Attaboi | "2024-06-10T11:52:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:52:43Z" | Entry not found |
yaraksen/komod_no_sftmx_4_3 | yaraksen | "2024-06-10T12:00:02Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-10T11:54:33Z" | Entry not found |
Jakh0103/new_llama3-8b_mcq_rag | Jakh0103 | "2024-06-10T12:01:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T11:55:19Z" | Entry not found |
tranthaihoa/llama3_evidence | tranthaihoa | "2024-06-10T11:55:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T11:55:21Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
danigambit/testxx | danigambit | "2024-06-10T11:55:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T11:55:34Z" | Entry not found |
longlivebigcat/hunheNew_qiwen7b_alp_lora2400_model | longlivebigcat | "2024-06-10T11:57:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2-7B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T11:57:28Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
base_model: unsloth/Qwen2-7B-bnb-4bit
---
# Uploaded model
- **Developed by:** longlivebigcat
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7B-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AshiqaSameem/gemma_biology_summarizer_model | AshiqaSameem | "2024-06-10T12:08:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-10T12:02:28Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** AshiqaSameem
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
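No inference example is provided; the sketch below assumes merged, directly loadable weights and a plain instruction-style summarization prompt. Both the prompt format and the loading path are assumptions, since the card documents neither:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AshiqaSameem/gemma_biology_summarizer_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Hypothetical prompt; the training prompt format is undocumented.
prompt = "Summarize the following biology passage:\n\nMitochondria convert nutrients into ATP..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```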
|
Mirman619/Realistic_3D_rendering_of_girls | Mirman619 | "2024-06-10T12:04:37Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-10T12:03:00Z" | ---
license: openrail
---
|
kartikay101/wtimit-base-960h-normal-reduced-learning-rate-all | kartikay101 | "2024-06-11T07:29:32Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-10T12:06:37Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/wav2vec2-base-960h
metrics:
- wer
model-index:
- name: wtimit-base-960h-normal-reduced-learning-rate-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wtimit-base-960h-normal-reduced-learning-rate-all
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3181
- Wer: 0.2132
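Since the card lacks a usage snippet, here is a minimal lower-level sketch of CTC decoding with this checkpoint; it assumes the repo includes processor files alongside the model weights, and the audio path is a placeholder:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "kartikay101/wtimit-base-960h-normal-reduced-learning-rate-all"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# The base model expects 16 kHz mono audio; resample if needed.
waveform, sr = torchaudio.load("speech.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```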
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.4297 | 2.1552 | 1000 | 0.3046 | 0.2440 |
| 0.3137 | 4.3103 | 2000 | 0.2941 | 0.2240 |
| 0.2578 | 6.4655 | 3000 | 0.2982 | 0.2176 |
| 0.2153 | 8.6207 | 4000 | 0.3063 | 0.2166 |
| 0.1998 | 10.7759 | 5000 | 0.3036 | 0.2155 |
| 0.1913 | 12.9310 | 6000 | 0.3049 | 0.2122 |
| 0.1836 | 15.0862 | 7000 | 0.3160 | 0.2161 |
| 0.1755 | 17.2414 | 8000 | 0.3192 | 0.2152 |
| 0.1681 | 19.3966 | 9000 | 0.3181 | 0.2132 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
LinearizedLLM/llama-2-7b-grouped-linear | LinearizedLLM | "2024-06-10T12:23:20Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T12:07:39Z" | ---
license: llama2
language:
- en
--- |
MFF212/alva | MFF212 | "2024-06-10T12:09:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:09:10Z" | Entry not found |
aleoaaaa/t5-base-fr-sum-cnndm_finetuned_10_06_14_09 | aleoaaaa | "2024-06-10T12:09:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:09:42Z" | Entry not found |
LinearizedLLM/llama-2-7b-medusa-head-grouped-linear | LinearizedLLM | "2024-06-10T12:11:58Z" | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:11:29Z" | Entry not found |
LinearizedLLM/llama-2-7b-medusa-head-local-linear | LinearizedLLM | "2024-06-10T12:12:46Z" | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:12:18Z" | Entry not found |
vmattoso/my-wine-classification-first-model | vmattoso | "2024-06-10T12:13:47Z" | 0 | 0 | transformers | [
"transformers",
"joblib",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:13:46Z" | Entry not found |
astarel/llama3-8b-oig-unsloth-merged | astarel | "2024-06-10T12:22:12Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-10T12:15:20Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** astarel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SerhiiML/wav2vec2-large-mms-1b-turkish-colab | SerhiiML | "2024-06-19T09:39:09Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:15:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HyperdustProtocol/ImHyperAGI-llama2-7b-813 | HyperdustProtocol | "2024-06-10T12:15:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:15:49Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** HyperdustProtocol
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
camenduru/t2v-turbo | camenduru | "2024-06-10T12:21:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:18:00Z" | Entry not found |
LinearizedLLM/llama-2-7b-local-linear | LinearizedLLM | "2024-06-10T12:24:10Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T12:19:09Z" | ---
license: llama2
language:
- en
--- |
jnalwa/customer_support_model | jnalwa | "2024-06-10T12:50:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:20:14Z" | Entry not found |
Justicescott/API-Code | Justicescott | "2024-06-10T12:34:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:20:27Z" | Entry not found |
Dcoolno1/Dcool1 | Dcoolno1 | "2024-06-10T12:22:05Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T12:22:05Z" | ---
license: apache-2.0
---
|
astarel/llama3-8b-oig-unsloth | astarel | "2024-06-10T12:22:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:22:25Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** astarel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ohadfel/whisper-tiny-q | ohadfel | "2024-06-13T07:48:37Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-10T12:23:12Z" | Entry not found |
vicky4s4s/openchat-8b | vicky4s4s | "2024-06-10T12:51:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"openchat",
"llama3",
"C-RLFT",
"conversational",
"arxiv:2309.11235",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T12:24:26Z" | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- openchat
- llama3
- C-RLFT
library_name: transformers
pipeline_tag: text-generation
---
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
<h1>Advancing Open-source Language Models with Mixed-Quality Data</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://openchat.team">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/imoneoi/openchat">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="https://arxiv.org/pdf/2309.11235.pdf">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/pQjnXvNKHY">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>
<p align="center" style="margin-top: 0px;">
<span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span>
<img src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
</p>
<div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center">
<a href="https://huggingface.co/openchat/openchat-3.5-0106" style="text-decoration: none; color: black;">
<span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">Llama 3 Version: OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.6</span>
<span style="font-size: 1.0em; font-family: 'Helvetica'; color: white; background-color: #90e0ef; vertical-align: top; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">20240522</span>
<span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;">
<br> 🏆 The Overall Best Performing Open-source 8B Model 🏆
<br> 🚀 Outperforms Llama-3-8B-Instruct and open-source finetunes/merges 🚀
</span>
</a>
</div>
<div style="display: flex; justify-content: center; align-items: center; width: 110%; margin-left: -5%;">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/benchmarks-openchat-3.6-20240522.svg" style="width: 100%; border-radius: 1em">
</div>
<div style="display: flex; justify-content: center; align-items: center">
<p>* Llama-3-Instruct often fails to follow the few-shot templates. See <a href="https://huggingface.co/openchat/openchat-3.6-8b-20240522/discussions/6">example</a>.</p>
</div>
<div align="center">
<h2> Usage </h2>
</div>
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24 GB of VRAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
| Model | Size | Context | Weights | Serving |
|-----------------------|------|---------|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| OpenChat-3.6-20240522 | 8B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat-3.6-8b-20240522) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.6-8b-20240522` |
<details>
<summary>Example request (click to expand)</summary>
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.6",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
</details>
### Conversation templates
💡 **Default Mode**: Best for coding, chat and general tasks.
It's a modified version of the Llama 3 Instruct template; the only difference is the role names, which are either `GPT4 Correct User` or `GPT4 Correct Assistant`.
```
<|start_header_id|>GPT4 Correct User<|end_header_id|>\n\nHello<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\nHi<|eot_id|><|start_header_id|>GPT4 Correct User<|end_header_id|>\n\nHow are you today?<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\n
```
⚠️ **Notice:** Remember to set `<|eot_id|>` as end of generation token.
The default template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:
```python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
{"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
```
## Inference using Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "vicky4s4s/openchat-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "user", "content": "Explain how large language models work in detail."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.5,
    max_new_tokens=1024,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
<div align="center">
<h2> Limitations </h2>
</div>
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, a phenomenon known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes produce harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Project Lead:**
- Guan Wang [imonenext at gmail dot com]
- [Alpay Ariyak](https://github.com/alpayariyak) [aariyak at wpi dot edu]
<div align="center">
<h2> Citation </h2>
</div>
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
``` |
aleoaaaa/mT5_multilingual_XLSum_finetuned_10_06_14_25 | aleoaaaa | "2024-06-10T12:25:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:25:05Z" | Entry not found |
tranthaihoa/mistral_evidence | tranthaihoa | "2024-06-10T12:25:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:25:24Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
silent666/Qwen-Qwen1.5-1.8B-1718022339 | silent666 | "2024-06-10T12:27:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T12:25:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
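In lieu of an official snippet, here is a hedged sketch based only on the repository tags (`qwen2`, `text-generation`); the chat template and intended usage are not documented in this card.
```python
# Minimal text-generation sketch; all usage details are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "silent666/Qwen-Qwen1.5-1.8B-1718022339"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello! Briefly introduce yourself.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```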
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JiaxinGe/llama3_4_bit_hellaswag_3_shots_generated_data_anthropic_dataset | JiaxinGe | "2024-06-10T17:46:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:26:11Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** JiaxinGe
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Anhnh27042001/new_lora_model_llama3 | Anhnh27042001 | "2024-06-11T02:39:13Z" | 0 | 0 | null | [
"safetensors",
"text-generation",
"conversational",
"region:us"
] | text-generation | "2024-06-10T12:27:15Z" | ---
pipeline_tag: text-generation
--- |
Ilya-Nazimov/lct-ruElectra-large-ner | Ilya-Nazimov | "2024-06-10T12:27:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:27:30Z" | Entry not found |
silent666/Qwen-Qwen1.5-1.8B-1718022552 | silent666 | "2024-06-10T12:35:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T12:29:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
onizukal/Boya2_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold1 | onizukal | "2024-06-11T20:30:38Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-10T12:29:18Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Boya2_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8548696844993141
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya2_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5790
- Accuracy: 0.8549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` reconstruction is sketched after this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
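As a rough reconstruction — not the exact training script, which is not published with the card — these settings map onto the 🤗 `TrainingArguments` API roughly as follows; the output directory name is hypothetical, and the Adam betas/epsilon listed above are the `Trainer` defaults.
```python
# Approximate sketch of the configuration above; dataset loading and the
# Trainer/model setup are omitted because they are not documented in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="beit-large-fold1",        # hypothetical name
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are already the defaults.
)
```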
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3103 | 1.0 | 914 | 0.3588 | 0.8494 |
| 0.3671 | 2.0 | 1828 | 0.3382 | 0.8669 |
| 0.2679 | 3.0 | 2742 | 0.4568 | 0.8491 |
| 0.13 | 4.0 | 3656 | 0.7675 | 0.8595 |
| 0.0539 | 5.0 | 4570 | 1.0063 | 0.8543 |
| 0.0034 | 6.0 | 5484 | 1.3345 | 0.8543 |
| 0.001 | 7.0 | 6398 | 1.4146 | 0.8562 |
| 0.0013 | 8.0 | 7312 | 1.6343 | 0.8529 |
| 0.0023 | 9.0 | 8226 | 1.5956 | 0.8486 |
| 0.0001 | 10.0 | 9140 | 1.5790 | 0.8549 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
vishal324/fine_tuned_llama3_8b | vishal324 | "2024-06-10T12:30:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:30:34Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LogicalPanda/Test | LogicalPanda | "2024-06-10T12:30:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:30:46Z" | Entry not found |
hoverinc/gestalt2_test | hoverinc | "2024-06-10T12:41:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:31:55Z" | Entry not found |
Dumele/Viv-final | Dumele | "2024-06-10T12:31:57Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T12:31:57Z" | ---
license: apache-2.0
---
|
erikka-22/MuseSwallow | erikka-22 | "2024-06-10T12:32:14Z" | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-06-10T12:32:14Z" | ---
license: cc-by-4.0
---
|
kajamo/model_16 | kajamo | "2024-06-10T14:18:06Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T12:32:15Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: model_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_16
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6194
- eval_accuracy: 0.7624
- eval_precision: 0.7632
- eval_recall: 0.7624
- eval_f1: 0.7621
- eval_runtime: 42.8182
- eval_samples_per_second: 285.977
- eval_steps_per_second: 17.89
- epoch: 14.0
- step: 42868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
M2XXX/whisper-id | M2XXX | "2024-06-10T14:19:30Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-10T12:32:35Z" | Entry not found |
LucasMscFGV/results | LucasMscFGV | "2024-06-10T12:35:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:35:22Z" | Entry not found |
jnalwa/customer_support_tokenizer | jnalwa | "2024-06-10T12:38:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:38:02Z" | Entry not found |
anesabdennebi/SecFalGEN-IDS | anesabdennebi | "2024-06-10T12:38:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:38:07Z" | Entry not found |
Souvikrad365/outputmodel | Souvikrad365 | "2024-06-10T12:39:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:39:23Z" | Entry not found |
tranthaihoa/gemma_context | tranthaihoa | "2024-06-10T12:40:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:40:28Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ENSTA-U2IS/tutorial-models | ENSTA-U2IS | "2024-06-11T09:18:43Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-10T12:40:47Z" | ---
license: mit
---
MNIST models trained for 75 epochs with no selection. |
odelz/hindi_fb1mms | odelz | "2024-06-13T05:35:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-10T12:42:24Z" | Entry not found |
shakun42/indic-bert-finetuned-squad1.1 | shakun42 | "2024-06-10T12:46:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:46:17Z" | Entry not found |
Reihaneh/wav2vec2_fy_common_voice_34 | Reihaneh | "2024-06-10T12:50:32Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:50:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sjjsn/Dj | Sjjsn | "2024-06-10T12:51:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:51:50Z" | Entry not found |
chaewoners/LisaofBlackpink | chaewoners | "2024-06-10T12:53:36Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-10T12:53:16Z" | ---
license: unknown
---
|
arhamk/ppo-LunarLander-v2-2 | arhamk | "2024-06-10T14:02:26Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-10T12:53:17Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.04 +/- 92.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 5
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'arhamk/ppo-LunarLander-v2-2'
'batch_size': 512
'minibatch_size': 128}
```
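To experiment with the checkpoint, something along these lines should work; the file name inside the repo is an assumption (inspect the repository first), and the cleanRL-style actor-critic network must be rebuilt exactly as in training before the weights can be loaded.
```python
# Sketch: fetch the checkpoint from the Hub ("model.pt" is hypothetical).
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="arhamk/ppo-LunarLander-v2-2", filename="model.pt")
state_dict = torch.load(path, map_location="cpu")
# Rebuild the PPO agent (same architecture as in training), then:
# agent.load_state_dict(state_dict)
```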
|
tranthaihoa/llama3_context | tranthaihoa | "2024-06-10T12:53:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T12:53:23Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mysharingorg/sharing_codebase | mysharingorg | "2024-06-10T17:00:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:53:39Z" | Entry not found |
sharifMunna/munna_bhai_mbbs_model_08_12_1 | sharifMunna | "2024-06-10T12:55:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:55:05Z" | Entry not found |
llmvetter/PixelCopter | llmvetter | "2024-06-10T13:24:54Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-10T12:55:37Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.50 +/- 27.08
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
izmuhammadra/jobseeker-falcon-7b | izmuhammadra | "2024-06-10T18:18:36Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-10T12:56:11Z" | ---
license: openrail
---
|
sharifMunna/munna_bhai_mbbs_model_12_12_1 | sharifMunna | "2024-06-10T12:58:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T12:58:50Z" | Entry not found |
loicloic/loic | loicloic | "2024-06-10T13:02:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T13:02:52Z" | Entry not found |
swiss-ai-center/giscup2023-deepLabV3Plus | swiss-ai-center | "2024-06-10T13:07:21Z" | 0 | 1 | keras | [
"keras",
"license:mit",
"region:us"
] | null | "2024-06-10T13:03:03Z" | ---
license: mit
---
|
techcto/solodev | techcto | "2024-06-10T13:03:44Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T13:03:44Z" | ---
license: apache-2.0
---
|
Propicto/t2p-t5-large-orfeo | Propicto | "2024-06-10T13:12:48Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-10T13:04:07Z" | ---
license: apache-2.0
---
|
badrabdullah/xls-r-300-cv17-bulgarian | badrabdullah | "2024-06-10T20:15:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-10T13:12:28Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: xls-r-300-cv17-bulgarian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: bg
split: validation
args: bg
metrics:
- name: Wer
type: wer
value: 0.2967878948765596
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/badr-nlp/xlsr-continual-finetuning-polish/runs/snulovqw)
# xls-r-300-cv17-bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4329
- Wer: 0.2968
- Cer: 0.0726
## Model description
More information needed
## Intended uses & limitations
More information needed
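Pending a fuller description, a minimal transcription sketch with the standard ASR pipeline is shown below; `sample.wav` is a placeholder path, and 16 kHz Bulgarian speech is assumed.
```python
# Hedged usage sketch for this fine-tuned wav2vec2 XLS-R checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="badrabdullah/xls-r-300-cv17-bulgarian",
)
print(asr("sample.wav")["text"])  # placeholder audio file
```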
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 4.0388 | 0.6579 | 100 | 4.1422 | 1.0 | 1.0 |
| 3.047 | 1.3158 | 200 | 3.0730 | 1.0 | 1.0 |
| 2.7349 | 1.9737 | 300 | 2.7601 | 0.9939 | 0.9946 |
| 0.6047 | 2.6316 | 400 | 0.6984 | 0.7954 | 0.1942 |
| 0.3868 | 3.2895 | 500 | 0.5550 | 0.5994 | 0.1519 |
| 0.3423 | 3.9474 | 600 | 0.4548 | 0.4804 | 0.1195 |
| 0.1942 | 4.6053 | 700 | 0.3973 | 0.4277 | 0.1034 |
| 0.1754 | 5.2632 | 800 | 0.4166 | 0.4391 | 0.1055 |
| 0.1734 | 5.9211 | 900 | 0.4146 | 0.4195 | 0.1018 |
| 0.1089 | 6.5789 | 1000 | 0.3859 | 0.3867 | 0.0937 |
| 0.233 | 7.2368 | 1100 | 0.4183 | 0.4054 | 0.1005 |
| 0.1519 | 7.8947 | 1200 | 0.4459 | 0.4151 | 0.1030 |
| 0.1176 | 8.5526 | 1300 | 0.4026 | 0.3845 | 0.0937 |
| 0.0997 | 9.2105 | 1400 | 0.3849 | 0.3590 | 0.0869 |
| 0.1266 | 9.8684 | 1500 | 0.4281 | 0.3781 | 0.0947 |
| 0.0945 | 10.5263 | 1600 | 0.4471 | 0.3983 | 0.0979 |
| 0.0575 | 11.1842 | 1700 | 0.4290 | 0.3660 | 0.0897 |
| 0.0854 | 11.8421 | 1800 | 0.4258 | 0.3749 | 0.0938 |
| 0.0558 | 12.5 | 1900 | 0.4242 | 0.3644 | 0.0907 |
| 0.0774 | 13.1579 | 2000 | 0.4339 | 0.3616 | 0.0888 |
| 0.0397 | 13.8158 | 2100 | 0.4155 | 0.3581 | 0.0882 |
| 0.0603 | 14.4737 | 2200 | 0.4681 | 0.3737 | 0.0943 |
| 0.0723 | 15.1316 | 2300 | 0.4446 | 0.3560 | 0.0875 |
| 0.0746 | 15.7895 | 2400 | 0.4430 | 0.3573 | 0.0889 |
| 0.0727 | 16.4474 | 2500 | 0.4549 | 0.3470 | 0.0870 |
| 0.0458 | 17.1053 | 2600 | 0.4581 | 0.3520 | 0.0873 |
| 0.0694 | 17.7632 | 2700 | 0.4414 | 0.3575 | 0.0896 |
| 0.0462 | 18.4211 | 2800 | 0.4235 | 0.3261 | 0.0802 |
| 0.0539 | 19.0789 | 2900 | 0.4496 | 0.3329 | 0.0810 |
| 0.0368 | 19.7368 | 3000 | 0.4043 | 0.3406 | 0.0846 |
| 0.0347 | 20.3947 | 3100 | 0.4367 | 0.3225 | 0.0789 |
| 0.019 | 21.0526 | 3200 | 0.4487 | 0.3272 | 0.0801 |
| 0.0361 | 21.7105 | 3300 | 0.4272 | 0.3241 | 0.0785 |
| 0.0475 | 22.3684 | 3400 | 0.4324 | 0.3191 | 0.0781 |
| 0.0341 | 23.0263 | 3500 | 0.4564 | 0.3398 | 0.0847 |
| 0.0454 | 23.6842 | 3600 | 0.4415 | 0.3188 | 0.0789 |
| 0.0346 | 24.3421 | 3700 | 0.4187 | 0.3072 | 0.0751 |
| 0.1315 | 25.0 | 3800 | 0.4480 | 0.3124 | 0.0765 |
| 0.0663 | 25.6579 | 3900 | 0.4488 | 0.3151 | 0.0779 |
| 0.0225 | 26.3158 | 4000 | 0.4372 | 0.3006 | 0.0739 |
| 0.0382 | 26.9737 | 4100 | 0.4164 | 0.2987 | 0.0730 |
| 0.0194 | 27.6316 | 4200 | 0.4190 | 0.2942 | 0.0718 |
| 0.0101 | 28.2895 | 4300 | 0.4328 | 0.2960 | 0.0726 |
| 0.0224 | 28.9474 | 4400 | 0.4302 | 0.2944 | 0.0720 |
| 0.0174 | 29.6053 | 4500 | 0.4329 | 0.2968 | 0.0726 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
SpeechResearch/wtimit-base-960h-whisper-reduced-learning-rate-all | SpeechResearch | "2024-06-10T13:12:35Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T13:12:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuziaMoura/2024-05-31-QeA-MMGD-unsloth_mistral_7b_bnb_4bit | LuziaMoura | "2024-06-10T13:13:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T13:12:38Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** LuziaMoura
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tranthaihoa/llama2_context | tranthaihoa | "2024-06-10T13:13:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T13:12:46Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
teste001/Narrando_Paulo | teste001 | "2024-06-10T13:14:08Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-10T13:13:35Z" | ---
license: openrail
---
|
b1zk1t1337/Garfield_Lorenzo_Music | b1zk1t1337 | "2024-06-10T13:16:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T13:14:18Z" | Entry not found |
Propicto/t2p-nllb-200-distilled-600M-orfeo | Propicto | "2024-06-10T13:22:20Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-10T13:14:33Z" | ---
license: apache-2.0
---
|
manbeast3b/KinoInferLord3 | manbeast3b | "2024-06-10T13:15:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T13:15:08Z" | Entry not found |
onizukal/Boya3_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold1 | onizukal | "2024-06-11T20:22:10Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-10T13:17:05Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Boya3_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8495434696308058
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya3_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5872
- Accuracy: 0.8495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3849 | 1.0 | 632 | 0.3967 | 0.8273 |
| 0.3194 | 2.0 | 1264 | 0.4043 | 0.8372 |
| 0.2199 | 3.0 | 1896 | 0.4423 | 0.8503 |
| 0.1532 | 4.0 | 2528 | 0.6718 | 0.8444 |
| 0.0267 | 5.0 | 3160 | 0.9647 | 0.8416 |
| 0.0853 | 6.0 | 3792 | 1.2277 | 0.8428 |
| 0.0213 | 7.0 | 4424 | 1.4343 | 0.8491 |
| 0.0008 | 8.0 | 5056 | 1.4458 | 0.8495 |
| 0.0035 | 9.0 | 5688 | 1.5300 | 0.8495 |
| 0.0003 | 10.0 | 6320 | 1.5872 | 0.8495 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
mndousse/naruto-lora | mndousse | "2024-06-10T13:18:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-10T13:18:07Z" | Entry not found |
JiaxinGe/llama3_4_bit_hellaswag_3_shots_transformed_data_anthropic_dataset | JiaxinGe | "2024-06-10T18:31:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T13:18:13Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** JiaxinGe
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
onizukal/Boya1_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold1 | onizukal | "2024-06-12T19:05:49Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-10T13:19:18Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Boya1_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8414336139017106
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1930
- Accuracy: 0.8414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3482 | 1.0 | 924 | 0.4193 | 0.8262 |
| 0.3157 | 2.0 | 1848 | 0.4359 | 0.8352 |
| 0.1507 | 3.0 | 2772 | 0.6032 | 0.8403 |
| 0.1694 | 4.0 | 3696 | 0.9383 | 0.8414 |
| 0.0111 | 5.0 | 4620 | 1.1930 | 0.8414 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
bakanaims/falcon-7b-AG-News | bakanaims | "2024-06-10T13:21:02Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T13:20:45Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: tiiuae/falcon-7b
metrics:
- accuracy
model-index:
- name: falcon-7b-AG-News
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-AG-News
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4483
- Balanced Accuracy: 0.8911
- Accuracy: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
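Pending more detail, one plausible way to load the adapter is sketched below; `num_labels=4` assumes the four AG News classes, and the label mapping must match whatever was used in training.
```python
# Hedged sketch: attach the PEFT adapter to the Falcon-7B base model.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "tiiuae/falcon-7b", num_labels=4  # assumption: AG News has 4 classes
)
model = PeftModel.from_pretrained(base, "bakanaims/falcon-7b-AG-News")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
# Note: Falcon's tokenizer may need a pad token set for batched inputs.
```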
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Balanced Accuracy | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.7257 | 1.0 | 25 | 1.2132 | 0.5713 | 0.47 |
| 0.8197 | 2.0 | 50 | 0.5488 | 0.8580 | 0.8367 |
| 0.2867 | 3.0 | 75 | 0.4392 | 0.8726 | 0.86 |
| 0.104 | 4.0 | 100 | 0.5123 | 0.8912 | 0.8833 |
| 0.0393 | 5.0 | 125 | 0.4483 | 0.8911 | 0.8867 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Mirman619/phoenix_offset | Mirman619 | "2024-06-10T13:22:36Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-10T13:21:04Z" | ---
license: openrail
---
|
alexgrigore/videomae-base-finetuned-good-gestureUnitV12 | alexgrigore | "2024-06-10T13:29:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-10T13:22:06Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-good-gestureUnitV12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-good-gestureUnitV12
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.8966
- Loss: 0.2937
- Accuracy Gunit: 0.8333
- Accuracy Nothing: 0.9556
## Model description
More information needed
## Intended uses & limitations
More information needed
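Until the card is filled in, a minimal inference sketch with the video-classification pipeline is given below; `clip.mp4` is a placeholder, and a video decoding backend (e.g. `av` or `decord`) must be installed.
```python
# Hedged usage sketch for this fine-tuned VideoMAE checkpoint.
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="alexgrigore/videomae-base-finetuned-good-gestureUnitV12",
)
print(clf("clip.mp4"))  # placeholder video path; returns label/score dicts
```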
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 160
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Accuracy Gunit | Accuracy Nothing |
|:-------------:|:------:|:----:|:--------:|:---------------:|:--------------:|:----------------:|
| 0.8549 | 0.1062 | 17 | 0.5714 | 0.6774 | 0.68 | 0.4839 |
| 0.6437 | 1.1062 | 34 | 0.4643 | 0.7254 | 1.0 | 0.0323 |
| 0.6226 | 2.1063 | 51 | 0.6071 | 0.6527 | 0.96 | 0.3226 |
| 0.5883 | 3.1063 | 68 | 0.5714 | 0.6389 | 1.0 | 0.2258 |
| 0.5136 | 4.1063 | 85 | 0.6964 | 0.5330 | 0.84 | 0.5806 |
| 0.4284 | 5.1063 | 102 | 0.8214 | 0.4506 | 0.84 | 0.8065 |
| 0.3474 | 6.1063 | 119 | 0.8214 | 0.3974 | 0.76 | 0.8710 |
| 0.2859 | 7.1063 | 136 | 0.8214 | 0.3822 | 0.64 | 0.9677 |
| 0.3059 | 8.1062 | 153 | 0.8393 | 0.3763 | 0.68 | 0.9677 |
| 0.2582 | 9.0437 | 160 | 0.8393 | 0.3738 | 0.68 | 0.9677 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Nick20241/1 | Nick20241 | "2024-06-10T13:23:24Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T13:23:24Z" | ---
license: apache-2.0
---
|
iloncka/exp_5_old_bg_raw-subs_1_v_5_convnext_nano.in12k_ft_in1k_ep_60 | iloncka | "2024-06-10T13:26:57Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2024-06-10T13:25:41Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
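A minimal sketch for loading this model back from the Hub, assuming it was pushed with `push_to_hub_fastai` (repo id taken from this card):

```python
from huggingface_hub import from_pretrained_fastai

repo_id = "iloncka/exp_5_old_bg_raw-subs_1_v_5_convnext_nano.in12k_ft_in1k_ep_60"
learner = from_pretrained_fastai(repo_id)  # returns a fastai Learner ready for learner.predict(...)
```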
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
vmattoso/my-wine-classification-first-model_v2 | vmattoso | "2024-06-10T13:25:49Z" | 0 | 0 | sklearn | [
"sklearn",
"joblib",
"skops",
"tabular-classification",
"license:mit",
"region:us"
] | tabular-classification | "2024-06-10T13:25:47Z" | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: model_main_v2_hf.joblib
widget:
- structuredData:
alcohol:
- 10.8
- 9.6
- 11.7
chlorides:
- 0.171
- 0.095
- 0.063
citric acid:
- 0.43
- 0.0
- 0.33
density:
- 0.9982
- 0.99854
- 0.99516
fixed acidity:
- 10.8
- 8.1
- 9.1
free sulfur dioxide:
- 27.0
- 5.0
- 13.0
pH:
- 3.17
- 3.36
- 3.26
residual sugar:
- 2.1
- 4.1
- 2.05
sulphates:
- 0.76
- 0.53
- 0.84
total sulfur dioxide:
- 66.0
- 14.0
- 27.0
volatile acidity:
- 0.47
- 0.82
- 0.29
---
# Model description
This is the best model
## Intended uses & limitations
[More Information Needed]
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| bootstrap | True |
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | sqrt |
| max_leaf_nodes | |
| max_samples | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| monotonic_cst | |
| n_estimators | 100 |
| n_jobs | |
| oob_score | False |
| random_state | 0 |
| verbose | 0 |
| warm_start | False |
</details>
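In code, the table above corresponds to the following constructor call — a sketch; every value except `random_state` is scikit-learn's default:

```python
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100, random_state=0)
```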
### Model Plot
`RandomForestClassifier(random_state=0)`
## Evaluation Results
| Metric | Value |
|----------|---------|
| accuracy | 0.7125 |
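The reported figure is the standard scikit-learn accuracy. A self-contained sketch of the computation, using scikit-learn's built-in wine dataset as a stand-in for the card's undocumented data:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustration only: load_wine is not necessarily this model's training data.
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```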
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# citation_bibtex

@inproceedings{...,year={2020}}
# get_started_code

import joblib
clf = joblib.load("model_main_v2_hf.joblib")  # model file declared in this card's metadata
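Once loaded, `clf.predict` expects rows with the eleven physico-chemical features listed in the widget metadata above (fixed acidity through alcohol).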
# model_card_authors
skops_user
# limitations
This model is not ready to be used in production.
# model_description
This is a RandomForest model trained on the wine classification dataset.
# confusion_matrix
![confusion_matrix](confusion_matrix.png)
|
silent666/Qwen-Qwen1.5-7B-1718026036 | silent666 | "2024-06-10T13:49:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-10T13:27:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
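A minimal sketch, assuming this checkpoint loads with the standard causal-LM classes and ships the usual Qwen1.5 chat template (untested against this specific fine-tune; `device_map="auto"` additionally requires `accelerate`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "silent666/Qwen-Qwen1.5-7B-1718026036"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```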
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Propicto/t2p-nllb-200-distilled-600M-commonvoice | Propicto | "2024-06-10T13:40:35Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-10T13:28:32Z" | ---
license: apache-2.0
---
|