modelId (string, length 5-122) | author (string, length 2-42) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, length 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (unknown) | card (string, length 1-901k)
---|---|---|---|---|---|---|---|---|---|
cxfajar197/iqbal-fine-tuned-bert | cxfajar197 | "2024-06-22T23:49:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T23:49:46Z" | Entry not found |
shuyuej/MedLLaMA3-70B-Multilingual | shuyuej | "2024-07-01T19:27:17Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T23:50:04Z" | ---
license: apache-2.0
---
|
mmarone/quip_20200301.25-1.bf | mmarone | "2024-06-23T00:08:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-22T23:52:37Z" | Entry not found |
SebasCortazar/example_model | SebasCortazar | "2024-06-22T23:53:40Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-22T23:53:40Z" | ---
license: mit
---
|
vjnadkarni/roberta-base-peft-ag-news | vjnadkarni | "2024-06-23T16:24:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T23:54:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vjnadkarni/roberta-base-fully-fine-tuned-ag-news | vjnadkarni | "2024-06-23T16:24:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-22T23:54:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4 | tjake | "2024-06-22T23:56:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-22T23:54:51Z" | Entry not found |
Tflatval/Whisper-large-BB-NB-RUND | Tflatval | "2024-06-23T00:09:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-22T23:57:33Z" | Entry not found |
ace-20/Ejemplo | ace-20 | "2024-06-22T23:59:07Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T23:59:07Z" | ---
license: apache-2.0
---
|
PhillipGuo/hp-lat-llama-No_PCA-epsilon1.0-pgd_layer9-rmu_layer12-wikitext-RMU-151 | PhillipGuo | "2024-06-23T04:15:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T00:00:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arielmartin/test | arielmartin | "2024-06-23T00:06:16Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-23T00:06:16Z" | ---
license: unknown
---
|
William2357/dogfirst1 | William2357 | "2024-06-23T00:08:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T00:08:41Z" | Entry not found |
Tflatval/Whisper-large-NB | Tflatval | "2024-06-23T00:15:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T00:09:08Z" | Entry not found |
Musix/Hector-Bonilla-Chaves-BR | Musix | "2024-06-23T00:12:58Z" | 0 | 1 | null | [
"region:us"
] | null | "2024-06-23T00:12:14Z" | Entry not found |
Tflatval/Whisper-small-BB-NB | Tflatval | "2024-06-23T00:22:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T00:20:25Z" | Entry not found |
Tflatval/Whisper-small_BB-RUND | Tflatval | "2024-06-23T00:24:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T00:23:09Z" | Entry not found |
Pyroserenus/L3-8B-Stheno-v3.3-32K-6.9bpw-h8-exl2 | Pyroserenus | "2024-06-24T04:25:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T00:25:22Z" | ---
license: cc-by-nc-4.0
language:
- en
---
Trained with compute from [Backyard.ai](https://backyard.ai/) | Thanks to them and @dynafire for helping me out.
---
## Exl2 quant of [Stheno 3.3 32k by Sao10k](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K)
Training Details:
<br>Trained at 8K Context -> Expanded to 32K Context with PoSE training.
Dataset Modifications:
<br>\- Further Cleaned up Roleplaying Samples -> Quality Check
<br>\- Removed Low Quality Samples from Manual Check -> Increased Baseline Quality Floor
<br>\- More Creative Writing Samples -> 2x Samples
<br>\- Remade and Refined Detailed Instruct Data
Notes:
<br>\- Training run is much less aggressive than previous Stheno versions.
<br>\- This model works when tested in bf16 with the same configs as within the file.
<br>\- I do not know the effects quantisation has on it.
<br>\- Roleplays pretty well. Feels nice in my opinion.
<br>\- It has some issues with long-context understanding and reasoning, but it is much better than plain RoPE scaling, so that is a plus.
<br>\- Reminder: this isn't a native 32K model. It has its issues, but it's coherent and works well.
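For anyone who wants to try it, here is a minimal loading sketch (not from the original card; it assumes 🤗 `transformers`, `accelerate`, and a bf16-capable GPU, per the bf16 note above):
```python
# Hedged sketch: load the 32K-context model in bf16 with transformers.
# The repo id is real; rope_theta (2M) and max_position_embeddings (32768)
# come from the training config below and should already be reflected in
# the repo's config.json, so no manual overrides are applied here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/L3-8B-Stheno-v3.3-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # tested in bf16 per the notes above
    device_map="auto",
)
```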
Sanity Check // Needle in a Haystack Results:
<br>\- This is not as complex as RULER or NIAN, but it's a basic evaluator. Some improper train examples had Haystack scores ranging from Red to Orange for most of the extended contexts.
![Results](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K/resolve/main/haystack.png)
Wandb Run:
![Wandb](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K/resolve/main/wandb.png)
---
Relevant Axolotl Configurations:
<br>-> Taken from [winglian/Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE)
<br>\- I tried to find my own configs and spent hours tinkering, but the one he used worked best, so I stuck to it.
<br>\- 2M Rope Theta had the best loss results during training compared to other values.
<br>\- Leaving it at 500K rope wasn't that much worse, but 4M and 8M Theta made the grad_norm values worsen even though loss dropped fast.
<br>\- Mixing in pretraining data was a PITA; it made formatting a lot worse.
<br>\- Pretraining / noise made it worse at Haystack too; it wasn't all green, mainly oranges.
<br>\- Improper / Bad Rope Theta shows in Grad_Norm exploding to thousands. It'll drop to low values alright, but it's a scary fast drop even with gradient clipping.
```
sequence_len: 8192
use_pose: true
pose_max_context_len: 32768
overrides_of_model_config:
rope_theta: 2000000.0
max_position_embeddings: 32768
# peft_use_dora: true
adapter: lora
peft_use_rslora: true
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.1
lora_target_linear: true
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
warmup_steps: 80
gradient_accumulation_steps: 6
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine_with_min_lr
learning_rate: 0.00004
lr_scheduler_kwargs:
min_lr: 0.000004
``` |
Tflatval/Whisper-small-NB-RUND | Tflatval | "2024-06-23T00:29:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T00:27:26Z" | Entry not found |
quirky-lats-at-mats/wmdp_cyber_lat_5 | quirky-lats-at-mats | "2024-06-23T00:33:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T00:33:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheRealheavy/InDaClubDrums | TheRealheavy | "2024-06-23T00:37:48Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T00:37:09Z" | ---
license: openrail
---
|
William2357/personfirst | William2357 | "2024-06-23T00:40:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T00:40:33Z" | Entry not found |
tsavage68/Summary_L3_200steps_1e6rate_03beta_CSFTDPO | tsavage68 | "2024-06-23T03:24:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T00:45:29Z" | ---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_200steps_1e6rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_200steps_1e6rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5961
- Rewards/chosen: 0.1275
- Rewards/rejected: -2.3694
- Rewards/accuracies: 0.1400
- Rewards/margins: 2.4969
- Logps/rejected: -20.0026
- Logps/chosen: -9.1277
- Logits/rejected: -1.1104
- Logits/chosen: -1.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.555 | 0.2004 | 50 | 0.5962 | 0.0976 | -1.3577 | 0.1400 | 1.4553 | -17.9791 | -9.1876 | -1.0985 | -1.1002 |
| 0.6585 | 0.4008 | 100 | 0.5962 | 0.1094 | -1.5231 | 0.1400 | 1.6326 | -18.3100 | -9.1639 | -1.1003 | -1.1019 |
| 0.6238 | 0.6012 | 150 | 0.5961 | 0.1393 | -2.3520 | 0.1400 | 2.4913 | -19.9677 | -9.1042 | -1.1096 | -1.1111 |
| 0.6065 | 0.8016 | 200 | 0.5961 | 0.1275 | -2.3694 | 0.1400 | 2.4969 | -20.0026 | -9.1277 | -1.1104 | -1.1118 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
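For context, here is a hedged TRL sketch of the hyperparameters listed above (not the author's actual script; the training data is unknown, so a public preference dataset stands in):
```python
# Hedged sketch of the listed DPO setup using TRL. The real dataset is
# unknown; trl-lib/ultrafeedback_binarized is a stand-in with the expected
# prompt/chosen/rejected columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/Summary_L3_1000steps_1e7rate_SFT2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
train = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="Summary_L3_200steps_1e6rate_03beta_CSFTDPO",
    beta=0.3,                       # the "03beta" in the run name
    learning_rate=1e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # total train batch size 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=200,
    seed=42,
)
DPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    tokenizer=tokenizer,  # renamed processing_class in newer TRL releases
).train()
```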
|
gyeethaha/pout | gyeethaha | "2024-06-23T00:52:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T00:51:30Z" | Entry not found |
William2357/doggofirst | William2357 | "2024-06-23T00:53:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T00:53:13Z" | Entry not found |
mjjj7/MJAgressiveVocals | mjjj7 | "2024-06-23T00:54:11Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T00:53:31Z" | ---
license: openrail
---
|
chYassine/AMAN-7b-v01 | chYassine | "2024-06-30T23:54:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"dataset:chYassine/WebAttack-CVSSMetrics",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T01:07:53Z" | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- chYassine/WebAttack-CVSSMetrics
library_name: peft
license: apache-2.0
---
## Model Details
**Model Description**
- **Developed by:** Yassine Chagna & Ahmed Ouidani
- **Model type:** Cyberattack detection
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** mistralai/Mistral-7B-Instruct-v0.3
### Uses
**Direct Use**
This model can be directly used for monitoring web access logs to detect potential cyber attacks.
**Downstream Use**
The model can be adapted for other types of logs and cyberattack detection scenarios.
**Out-of-Scope Use**
This model is not designed for general-purpose NLP tasks unrelated to cyberattack detection.
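A minimal loading sketch (not part of the original card; it assumes a standard PEFT adapter layout and the 🤗 `peft`/`transformers` APIs):
```python
# Hedged sketch: attach the AMAN-7b-v01 PEFT adapter to its Mistral base.
# Repo ids come from this card's metadata; the log line is illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "chYassine/AMAN-7b-v01"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

log = '10.0.0.5 - - "GET /item.php?id=1 UNION SELECT password FROM users HTTP/1.1" 200'
inputs = tokenizer(log, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```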
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
### Summary
### Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: T4 GPU
### Technical Specifications [optional]
**Model Architecture and Objective**
The model uses the architecture of mistralai/Mistral-7B-Instruct-v0.3, fine-tuned for cyberattack detection on web access logs.
**Compute Infrastructure**
Hardware: Single T4 GPU
Software: PEFT 0.11.2.dev0
### Framework versions
PEFT 0.11.2.dev0 |
cephasyeo/huggingfacemodel | cephasyeo | "2024-06-23T01:09:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:09:31Z" | Entry not found |
Zvin/Zvingaling | Zvin | "2024-06-23T01:10:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:10:22Z" | Entry not found |
rizwanaslam/model | rizwanaslam | "2024-06-23T01:14:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T01:14:26Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** rizwanaslam
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
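A hedged loading sketch (not from the card) using Unsloth's `FastLanguageModel`, the usual entry point for these 4-bit bases:
```python
# Hedged sketch: load the 4-bit Unsloth base this model was fine-tuned from.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # base model per this card
    max_seq_length=2048,  # illustrative; set to whatever your task needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```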
|
Eitezaz/Ironman | Eitezaz | "2024-06-23T01:17:14Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T01:17:14Z" | ---
license: apache-2.0
---
|
Alsebay/TestSMP-v0.1-alt | Alsebay | "2024-06-23T01:24:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.9",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T01:19:47Z" | ---
base_model:
- MaziyarPanahi/Llama-3-8B-Instruct-v0.9
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 16]
model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
parameters:
density: 0.4
weight: 1.0
- layer_range: [0, 16]
model: Sao10K/L3-8B-Stheno-v3.2
parameters:
density: 0.6
weight: 0.9
- sources:
- layer_range: [16, 32]
model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
parameters:
density: 0.2
weight: 0.8
- layer_range: [16, 32]
model: Sao10K/L3-8B-Stheno-v3.2
parameters:
density: 0.8
weight: 1.0
merge_method: dare_ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
parameters:
int8_mask: true
dtype: bfloat16
```
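To reproduce the merge locally, the config above can be saved to a file and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model` (command name per the mergekit README; flags such as `--cuda` vary by version).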
|
LarryAIDraw/colorburstXL_ponyV10 | LarryAIDraw | "2024-06-23T07:20:18Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-23T01:29:23Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/341261/colorburst-xl?modelVersionId=384017 |
morgana-rodrigues/sales-bot-llama3 | morgana-rodrigues | "2024-06-23T01:32:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:32:56Z" | Entry not found |
cminja/whisper-tiny-sr-yodas | cminja | "2024-06-24T14:12:09Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T01:36:50Z" | Entry not found |
yong-tang/cspace | yong-tang | "2024-07-01T20:30:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:39:40Z" | Entry not found |
DeBiaoFan/segformer-b0-finetuned-segments-sidewalk-2 | DeBiaoFan | "2024-06-23T01:42:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:42:26Z" | Entry not found |
KiFramesAI/SauvegardeDonnees | KiFramesAI | "2024-06-23T01:42:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:42:27Z" | Entry not found |
MomoD2R/LLAMA3_Lang_Vuln_Code_Detection | MomoD2R | "2024-06-23T01:49:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T01:49:30Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** MomoD2R
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Wawaworker/swrvskdx | Wawaworker | "2024-06-23T02:28:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T01:54:10Z" | Entry not found |
DBangshu/Base_gemma_e5_1_2 | DBangshu | "2024-06-23T01:59:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T01:57:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hiroki-rad/bert_base-japanese-v3-wrime-lora-fp16 | hiroki-rad | "2024-06-23T01:58:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T01:58:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
usuario101/zerogerman | usuario101 | "2024-06-23T02:03:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:03:18Z" | German DeepPhonemizer autoregressive transformer-based g2p model, intended for use in zerovox (https://github.com/gooofy/zerovox), based on the Zamia Speech phoneset.
---
license: apache-2.0
---
|
EnduGalletaa/tashicomm | EnduGalletaa | "2024-06-23T02:04:55Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2024-06-23T02:04:36Z" | ---
license: artistic-2.0
---
|
mmarone/portraits-sketch-stackv2.50-50.bf | mmarone | "2024-06-23T02:32:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:04:56Z" | Entry not found |
Mistermango24/Toonify-PonyXL | Mistermango24 | "2024-06-23T02:27:20Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2024-06-23T02:07:33Z" | ---
license: artistic-2.0
---
|
karthikmit/openai-whisper-medium-LORA-EN-NQ-v2 | karthikmit | "2024-06-23T02:07:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T02:07:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
usuario101/zerohifi_gan | usuario101 | "2024-06-23T02:13:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:13:19Z" | Pretrained VCTK HiFi-GAN models from "HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis" by Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae (https://github.com/jik876/hifi-gan).
---
license: mit
---
|
dfndr11/Llama-2-7b-Intel-Basic-Finetune | dfndr11 | "2024-06-23T02:27:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"optimum_habana",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-06-23T02:13:31Z" | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
nihil117/GrayI_v0.5 | nihil117 | "2024-06-23T02:19:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T02:15:47Z" | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** nihil117
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LoneWolfVPS/stylizedMODEL | LoneWolfVPS | "2024-06-23T02:40:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:17:12Z" | Entry not found |
OmarAmir2001/Visioneer-v4-4 | OmarAmir2001 | "2024-06-23T02:19:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:19:24Z" | Entry not found |
oron1208/kivolotolosulu_tuvalai | oron1208 | "2024-06-23T06:58:26Z" | 0 | 1 | null | [
"text-to-image",
"ja",
"license:other",
"region:us"
] | text-to-image | "2024-06-23T02:21:28Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- ja
pipeline_tag: text-to-image
---
<style>
body {
display: flex;
align-items: center;
justify-content: center;
height: 100vh;
margin: 0;
font-family: Arial, sans-serif;
background-color: #f4f4f9;
overflow: auto;
}
.container {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
width: 100%;
padding: 20px;
}
.title-container {
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
padding: 1em;
border-radius: 10px;
}
.title {
font-size: 3em;
font-family: 'Montserrat', sans-serif;
text-align: center;
font-weight: bold;
}
.title span {
background: -webkit-linear-gradient(45deg, #D08FC0, #D2165B, #E49E1F);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.gallery {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 10px;
}
.gallery img {
width: 100%;
height: auto;
margin-top: 0px;
margin-bottom: 0px;
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
transition: transform 0.3s;
}
.gallery img:hover {
transform: scale(1.05);
}
.note {
font-size: 1em;
opacity: 50%;
text-align: center;
margin-top: 20px;
color: #555;
}
</style>
<div class="container">
<div class="title-container">
<div class="title"><span>きぼぉとぉすぅ_つゔぁい</span></div>
</div>
<div class="gallery">
<img src="https://huggingface.co/oron1208/kivolotolosulu_tuvalai/resolve/main/img/sample_01.png" alt="Image 1">
<img src="https://huggingface.co/oron1208/kivolotolosulu_tuvalai/resolve/main/img/sample_02.png" alt="Image 2">
<img src="https://huggingface.co/oron1208/kivolotolosulu_tuvalai/resolve/main/img/smaple_03.png" alt="Image 3">
<img src="https://huggingface.co/oron1208/kivolotolosulu_tuvalai/resolve/main/img/sample_04.png" alt="Image 4">
<img src="https://huggingface.co/oron1208/kivolotolosulu_tuvalai/resolve/main/img/sample_05.png" alt="Image 5">
<img src="https://huggingface.co/oron1208/kivolotolosulu_tuvalai/resolve/main/img/sample_06.png" alt="Image 6">
</div>
<div class="note">
</div>
</div>
# About the Model
## Model name: きぼぉとぉすぅ_つゔぁい (kivolotolosulu_tuvalai)
- This model is a fine-tune of the following model:
1. Kivotos-XL-2.0<br>https://huggingface.co/yodayo-ai/kivotos-xl-2.0
# LICENSE
This model follows the Fair AI Public License 1.0-SD, which is also compatible with the Stable Diffusion models' license.
Key points:
<p>Modification sharing: if you modify this model, you must share both your changes and the original license.</p>
<p>Source code accessibility: if your modified version is network-accessible, provide a way (such as a download link) for others to get the source code. This also applies to models derived from it.</p>
<p>Distribution terms: any distribution must be under this license or another with similar rules.</p>
Original text:
- The model falls under [Fair AI Public License 1.0-SD license](https://freedevproject.org/faipl-1.0-sd/), which is compatible with [Stable Diffusion models’ license](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md). Key points:
1. Modification Sharing: If you modify the model, you must share both your changes and the original license.
2. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. Distribution Terms: Any distribution must be under this license or another with similar rules.
## Training
| Feature | FT |
|-------------------------------|----------------------------|
| **Hardware** | 4090 24GB PCIe |
| **Batch Size** | 12 |
| **Gradient Accumulation Steps** | 1 |
| **Noise Offset** | 0.035 |
| **Epochs** | 10 |
| **UNet Learning Rate** | 2.1e-6 |
| **Text Encoder Learning Rate** | 1.5e-7 |
| **Optimizer** | Adafactor |
| **Scheduler** | Adafactor |
| **Images** | 6,300 images |
|
ymoslem/whisper-medium-ga2en-v5.3.2-19k-r | ymoslem | "2024-06-23T12:34:53Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/Tatoeba-Speech-Irish",
"dataset:ymoslem/Wikimedia-Speech-Irish",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-23T02:22:06Z" | ---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
metrics:
- bleu
- wer
model-index:
- name: Whisper Medium GA-EN Speech Translation, 2 epochs, 19k steps
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia
type: ymoslem/IWSLT2023-GA-EN
metrics:
- name: Bleu
type: bleu
value: 31.69
- name: Wer
type: wer
value: 64.970733903647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium GA-EN Speech Translation, 2 epochs, 19k steps
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5548
- Bleu: 31.69
- Chrf: 50.38
- Wer: 64.9707
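A minimal usage sketch is given below. It assumes the checkpoint loads with the standard 🤗 Transformers speech-recognition pipeline, and `sample_ga.wav` is a placeholder for your own Irish-language audio file:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the "transcription" it returns is the
# English translation of the Irish speech input.
translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-medium-ga2en-v5.3.2-19k-r",
)

result = translator("sample_ga.wav")  # hypothetical local audio file
print(result["text"])
```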
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 19000
- mixed_precision_training: Native AMP
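For illustration, here is a sketch of how these hyperparameters might map onto 🤗 `Seq2SeqTrainingArguments`. The actual training script is not published in this card, so names such as `output_dir` are assumptions:
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the listed hyperparameters; "Native AMP" mixed
# precision corresponds to fp16=True.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-ga2en",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_steps=19000,
    fp16=True,
)
```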
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:-----:|:-----:|:---------------:|:--------:|
| 2.4382 | 0.0109 | 100 | 3.07 | 16.85 | 2.1114 | 171.0491 |
| 2.6151 | 0.0219 | 200 | 6.25 | 23.02 | 2.0207 | 126.9698 |
| 2.5699 | 0.0328 | 300 | 5.71 | 24.03 | 1.8660 | 155.5606 |
| 2.3084 | 0.0438 | 400 | 9.87 | 28.45 | 1.8084 | 129.0860 |
| 2.3327 | 0.0547 | 500 | 12.01 | 31.92 | 1.7823 | 102.7915 |
| 2.1495 | 0.0657 | 600 | 13.97 | 32.4 | 1.7238 | 98.6042 |
| 2.2164 | 0.0766 | 700 | 11.21 | 33.19 | 1.6538 | 146.0153 |
| 2.0071 | 0.0876 | 800 | 14.34 | 35.72 | 1.7038 | 96.9383 |
| 1.8334 | 0.0985 | 900 | 16.51 | 37.23 | 1.6329 | 96.8032 |
| 1.8359 | 0.1095 | 1000 | 17.87 | 35.94 | 1.6637 | 84.4665 |
| 1.7703 | 0.1204 | 1100 | 19.54 | 39.02 | 1.5626 | 79.7839 |
| 1.5805 | 0.1314 | 1200 | 20.19 | 40.4 | 1.5618 | 77.8028 |
| 1.4545 | 0.1423 | 1300 | 13.88 | 35.53 | 1.5599 | 112.5619 |
| 1.5177 | 0.1533 | 1400 | 18.79 | 40.11 | 1.4880 | 84.6916 |
| 1.6335 | 0.1642 | 1500 | 16.41 | 38.64 | 1.4996 | 96.9833 |
| 1.3809 | 0.1752 | 1600 | 18.3 | 40.17 | 1.4739 | 101.8910 |
| 1.2694 | 0.1861 | 1700 | 22.53 | 43.15 | 1.4498 | 76.9923 |
| 1.2321 | 0.1970 | 1800 | 19.92 | 42.59 | 1.4163 | 84.6015 |
| 1.1969 | 0.2080 | 1900 | 21.63 | 44.92 | 1.4137 | 85.3670 |
| 1.2023 | 0.2189 | 2000 | 20.42 | 41.57 | 1.3530 | 82.8906 |
| 1.1676 | 0.2299 | 2100 | 22.82 | 44.23 | 1.3723 | 78.1180 |
| 1.0332 | 0.2408 | 2200 | 26.73 | 44.75 | 1.3641 | 70.2386 |
| 0.8589 | 0.2518 | 2300 | 26.94 | 46.89 | 1.3344 | 72.7600 |
| 0.9829 | 0.2627 | 2400 | 28.15 | 47.21 | 1.3181 | 69.1130 |
| 0.8228 | 0.2737 | 2500 | 26.98 | 47.41 | 1.3049 | 74.0207 |
| 0.7667 | 0.2846 | 2600 | 30.0 | 49.42 | 1.2698 | 65.1058 |
| 0.8749 | 0.2956 | 2700 | 27.91 | 47.67 | 1.2878 | 66.9518 |
| 0.7504 | 0.3065 | 2800 | 32.03 | 50.35 | 1.2670 | 63.6650 |
| 0.7069 | 0.3175 | 2900 | 30.7 | 49.53 | 1.2771 | 64.4304 |
| 0.7199 | 0.3284 | 3000 | 30.21 | 48.93 | 1.2658 | 65.5561 |
| 0.6207 | 0.3394 | 3100 | 30.82 | 49.11 | 1.2687 | 66.0063 |
| 0.5995 | 0.3503 | 3200 | 31.99 | 50.94 | 1.2207 | 62.9446 |
| 0.6294 | 0.3612 | 3300 | 31.05 | 50.85 | 1.2422 | 64.7006 |
| 0.4612 | 0.3722 | 3400 | 33.1 | 51.82 | 1.2203 | 61.9090 |
| 0.5138 | 0.3831 | 3500 | 32.08 | 51.86 | 1.2007 | 63.0797 |
| 0.5059 | 0.3941 | 3600 | 31.8 | 51.19 | 1.2130 | 63.9352 |
| 0.417 | 0.4050 | 3700 | 32.45 | 51.41 | 1.1975 | 62.2692 |
| 0.2958 | 0.4160 | 3800 | 29.29 | 51.39 | 1.2046 | 62.7645 |
| 0.393 | 0.4269 | 3900 | 28.95 | 51.45 | 1.1968 | 63.1697 |
| 0.3858 | 0.4379 | 4000 | 29.54 | 51.58 | 1.1929 | 62.4043 |
| 0.5416 | 0.4488 | 4100 | 27.29 | 43.94 | 1.3522 | 67.9424 |
| 0.6644 | 0.4598 | 4200 | 23.16 | 44.45 | 1.4191 | 77.3976 |
| 0.5246 | 0.4707 | 4300 | 22.26 | 44.91 | 1.4221 | 77.2625 |
| 0.614 | 0.4817 | 4400 | 26.9 | 46.15 | 1.3956 | 70.4638 |
| 0.5973 | 0.4926 | 4500 | 25.55 | 45.51 | 1.4152 | 76.7222 |
| 0.544 | 0.5036 | 4600 | 23.54 | 47.87 | 1.4091 | 79.1085 |
| 0.5975 | 0.5145 | 4700 | 21.85 | 42.69 | 1.4644 | 78.5682 |
| 0.4675 | 0.5255 | 4800 | 22.93 | 43.69 | 1.4598 | 76.9023 |
| 0.7959 | 0.5364 | 4900 | 24.91 | 44.98 | 1.3884 | 74.5610 |
| 0.5936 | 0.5473 | 5000 | 26.91 | 44.88 | 1.4235 | 69.0680 |
| 0.4631 | 0.5583 | 5100 | 25.77 | 45.81 | 1.4002 | 74.0207 |
| 0.5188 | 0.5692 | 5200 | 28.37 | 45.48 | 1.4405 | 66.2765 |
| 0.4675 | 0.5802 | 5300 | 21.1 | 43.11 | 1.4045 | 92.1207 |
| 0.4214 | 0.5911 | 5400 | 25.62 | 44.82 | 1.4250 | 72.2197 |
| 0.4592 | 0.6021 | 5500 | 27.24 | 46.44 | 1.4107 | 70.0585 |
| 0.4809 | 0.6130 | 5600 | 27.93 | 47.42 | 1.3896 | 69.5182 |
| 0.4364 | 0.6240 | 5700 | 25.84 | 47.47 | 1.3808 | 77.6227 |
| 0.3333 | 0.6349 | 5800 | 26.46 | 47.08 | 1.4203 | 72.4899 |
| 0.3345 | 0.6459 | 5900 | 23.1 | 44.6 | 1.4763 | 81.2247 |
| 0.3368 | 0.6568 | 6000 | 24.55 | 45.76 | 1.4182 | 80.5493 |
| 0.3061 | 0.6678 | 6100 | 23.1 | 45.97 | 1.4218 | 81.3597 |
| 0.324 | 0.6787 | 6200 | 28.26 | 47.06 | 1.4453 | 67.5822 |
| 0.2667 | 0.6897 | 6300 | 27.87 | 46.14 | 1.4494 | 69.0230 |
| 0.2845 | 0.7006 | 6400 | 26.39 | 46.72 | 1.4448 | 71.4543 |
| 0.3125 | 0.7115 | 6500 | 27.81 | 46.45 | 1.4643 | 70.0135 |
| 0.264 | 0.7225 | 6600 | 26.27 | 47.75 | 1.4244 | 72.7600 |
| 0.2426 | 0.7334 | 6700 | 25.84 | 46.68 | 1.4081 | 76.4070 |
| 0.2174 | 0.7444 | 6800 | 30.67 | 47.92 | 1.4036 | 65.8262 |
| 0.2265 | 0.7553 | 6900 | 28.11 | 49.12 | 1.4174 | 71.2292 |
| 0.2016 | 0.7663 | 7000 | 30.43 | 49.47 | 1.4341 | 65.9163 |
| 0.1865 | 0.7772 | 7100 | 32.05 | 49.5 | 1.3690 | 63.1697 |
| 0.2148 | 0.7882 | 7200 | 32.29 | 49.91 | 1.3603 | 63.8901 |
| 0.2126 | 0.7991 | 7300 | 32.07 | 49.31 | 1.4046 | 63.6650 |
| 0.1594 | 0.8101 | 7400 | 29.94 | 47.48 | 1.4122 | 65.5110 |
| 0.1295 | 0.8210 | 7500 | 30.14 | 49.79 | 1.4243 | 65.7812 |
| 0.1378 | 0.8320 | 7600 | 31.23 | 49.42 | 1.4334 | 65.9613 |
| 0.1701 | 0.8429 | 7700 | 31.04 | 49.95 | 1.4149 | 65.6461 |
| 0.1102 | 0.8539 | 7800 | 31.37 | 50.2 | 1.4082 | 63.7100 |
| 0.1267 | 0.8648 | 7900 | 32.86 | 50.83 | 1.3642 | 60.8285 |
| 0.1384 | 0.8758 | 8000 | 33.47 | 49.61 | 1.3860 | 59.8829 |
| 0.1128 | 0.8867 | 8100 | 32.78 | 50.04 | 1.3840 | 61.8190 |
| 0.1197 | 0.8976 | 8200 | 33.69 | 50.94 | 1.3641 | 61.8190 |
| 0.1181 | 0.9086 | 8300 | 32.0 | 49.65 | 1.3913 | 63.5299 |
| 0.0866 | 0.9195 | 8400 | 30.39 | 48.48 | 1.4171 | 68.0324 |
| 0.0784 | 0.9305 | 8500 | 32.27 | 49.32 | 1.3850 | 63.3949 |
| 0.092 | 0.9414 | 8600 | 33.78 | 51.13 | 1.3880 | 61.2787 |
| 0.0685 | 0.9524 | 8700 | 34.33 | 51.23 | 1.3876 | 61.1887 |
| 0.0783 | 0.9633 | 8800 | 33.4 | 48.9 | 1.4010 | 62.5844 |
| 0.0735 | 0.9743 | 8900 | 33.72 | 49.01 | 1.4035 | 61.5038 |
| 0.0875 | 0.9852 | 9000 | 30.44 | 49.06 | 1.4064 | 67.5371 |
| 0.0822 | 0.9962 | 9100 | 34.64 | 51.51 | 1.3803 | 60.5133 |
| 0.041 | 1.0071 | 9200 | 34.66 | 52.06 | 1.3678 | 59.4327 |
| 0.0351 | 1.0181 | 9300 | 33.88 | 51.16 | 1.3739 | 61.3688 |
| 0.0368 | 1.0290 | 9400 | 35.2 | 51.73 | 1.3846 | 60.4232 |
| 0.035 | 1.0400 | 9500 | 34.23 | 51.32 | 1.3753 | 60.8735 |
| 0.0277 | 1.0509 | 9600 | 35.0 | 52.59 | 1.3788 | 60.0180 |
| 0.0247 | 1.0619 | 9700 | 34.69 | 51.7 | 1.3914 | 60.2882 |
| 0.0321 | 1.0728 | 9800 | 34.63 | 51.91 | 1.3804 | 60.6033 |
| 0.0286 | 1.0837 | 9900 | 33.92 | 51.64 | 1.3795 | 61.8640 |
| 0.0239 | 1.0947 | 10000 | 33.79 | 51.67 | 1.3818 | 61.6839 |
| 0.085         | 1.1056 | 10100 | 26.54 | 46.14 | 1.5082          | 70.9140  |
| 0.1002        | 1.1166 | 10200 | 31.06 | 47.27 | 1.5156          | 64.7006  |
| 0.1144        | 1.1275 | 10300 | 24.93 | 44.33 | 1.5837          | 73.3003  |
| 0.1137        | 1.1385 | 10400 | 28.96 | 47.2  | 1.5372          | 65.7812  |
| 0.1182        | 1.1494 | 10500 | 30.05 | 47.09 | 1.5366          | 65.6461  |
| 0.1214        | 1.1604 | 10600 | 26.83 | 46.73 | 1.5160          | 70.2386  |
| 0.1413        | 1.1713 | 10700 | 27.92 | 47.04 | 1.5384          | 70.3287  |
| 0.1011        | 1.1823 | 10800 | 27.71 | 46.13 | 1.5791          | 70.5538  |
| 0.1187        | 1.1932 | 10900 | 22.91 | 44.41 | 1.6188          | 81.8100  |
| 0.1364        | 1.2042 | 11000 | 29.38 | 45.46 | 1.5807          | 67.3570  |
| 0.1158        | 1.2151 | 11100 | 25.33 | 44.25 | 1.5819          | 76.0919  |
| 0.1199        | 1.2261 | 11200 | 27.52 | 46.0  | 1.5727          | 68.3926  |
| 0.1213        | 1.2370 | 11300 | 26.92 | 45.92 | 1.5728          | 69.4732  |
| 0.1291        | 1.2479 | 11400 | 28.13 | 44.72 | 1.5743          | 67.9874  |
| 0.131         | 1.2589 | 11500 | 29.42 | 48.65 | 1.5337          | 66.6367  |
| 0.1279        | 1.2698 | 11600 | 22.78 | 43.45 | 1.6752          | 79.2436  |
| 0.116         | 1.2808 | 11700 | 26.91 | 45.46 | 1.6056          | 70.4638  |
| 0.1126        | 1.2917 | 11800 | 26.11 | 45.71 | 1.6341          | 70.5988  |
| 0.1263        | 1.3027 | 11900 | 28.31 | 46.34 | 1.6231          | 69.6983  |
| 0.1072        | 1.3136 | 12000 | 30.51 | 47.66 | 1.5580          | 65.5110  |
| 0.115         | 1.3246 | 12100 | 28.13 | 46.39 | 1.5944          | 68.2575  |
| 0.1014        | 1.3355 | 12200 | 28.75 | 47.01 | 1.5486          | 67.9874  |
| 0.1149        | 1.3465 | 12300 | 29.4  | 46.21 | 1.5973          | 67.4021  |
| 0.1131        | 1.3574 | 12400 | 29.94 | 48.17 | 1.5769          | 65.1959  |
| 0.1032        | 1.3684 | 12500 | 25.02 | 47.46 | 1.6363          | 78.4331  |
| 0.1103        | 1.3793 | 12600 | 28.2  | 45.82 | 1.6057          | 68.7528  |
| 0.109         | 1.3903 | 12700 | 28.0  | 45.95 | 1.5884          | 69.1130  |
| 0.0927        | 1.4012 | 12800 | 29.86 | 47.67 | 1.5881          | 67.5371  |
| 0.0829        | 1.4122 | 12900 | 29.15 | 45.76 | 1.5855          | 67.6722  |
| 0.0955        | 1.4231 | 13000 | 27.59 | 46.3  | 1.6313          | 69.7884  |
| 0.0874        | 1.4340 | 13100 | 30.52 | 45.99 | 1.6173          | 64.5205  |
| 0.0816        | 1.4450 | 13200 | 30.89 | 47.34 | 1.5864          | 65.1959  |
| 0.0836        | 1.4559 | 13300 | 30.18 | 47.3  | 1.6319          | 65.5110  |
| 0.0832        | 1.4669 | 13400 | 28.85 | 47.8  | 1.6353          | 67.3570  |
| 0.0622        | 1.4778 | 13500 | 28.91 | 46.78 | 1.6117          | 69.0680  |
| 0.0689        | 1.4888 | 13600 | 31.32 | 47.68 | 1.5919          | 64.7006  |
| 0.0921        | 1.4997 | 13700 | 30.09 | 46.8  | 1.6180          | 66.8167  |
| 0.0754        | 1.5107 | 13800 | 30.77 | 47.55 | 1.5755          | 64.4755  |
| 0.0844        | 1.5216 | 13900 | 31.12 | 48.9  | 1.5681          | 64.9707  |
| 0.0696        | 1.5326 | 14000 | 31.27 | 49.63 | 1.5481          | 64.0252  |
| 0.0914        | 1.5435 | 14100 | 29.61 | 47.28 | 1.5603          | 65.8262  |
| 0.0789        | 1.5545 | 14200 | 31.85 | 48.49 | 1.5896          | 62.8546  |
| 0.0572        | 1.5654 | 14300 | 27.55 | 46.87 | 1.5931          | 69.1580  |
| 0.0619        | 1.5764 | 14400 | 28.82 | 46.89 | 1.5988          | 66.4566  |
| 0.0566        | 1.5873 | 14500 | 29.97 | 48.6  | 1.5838          | 66.6817  |
| 0.0661        | 1.5982 | 14600 | 30.47 | 46.25 | 1.6447          | 65.5561  |
| 0.0607        | 1.6092 | 14700 | 32.31 | 48.53 | 1.5621          | 63.1247  |
| 0.0566        | 1.6201 | 14800 | 31.08 | 49.25 | 1.5838          | 66.3665  |
| 0.0354        | 1.6311 | 14900 | 30.73 | 48.58 | 1.5723          | 66.3215  |
| 0.0585        | 1.6420 | 15000 | 29.84 | 47.27 | 1.5825          | 66.5466  |
| 0.0542        | 1.6530 | 15100 | 28.7  | 47.32 | 1.6012          | 69.8784  |
| 0.0641        | 1.6639 | 15200 | 28.8  | 47.35 | 1.5662          | 67.4021  |
| 0.0588        | 1.6749 | 15300 | 30.62 | 47.51 | 1.5596          | 65.9613  |
| 0.0401        | 1.6858 | 15400 | 30.74 | 48.55 | 1.5719          | 66.0513  |
| 0.043         | 1.6968 | 15500 | 29.37 | 47.1  | 1.5979          | 69.3381  |
| 0.0384        | 1.7077 | 15600 | 29.56 | 48.58 | 1.5718          | 68.7978  |
| 0.0399        | 1.7187 | 15700 | 30.84 | 47.72 | 1.5873          | 66.9518  |
| 0.0437        | 1.7296 | 15800 | 31.43 | 48.06 | 1.5493          | 65.7812  |
| 0.0473        | 1.7406 | 15900 | 31.4  | 49.44 | 1.5185          | 64.1153  |
| 0.0477        | 1.7515 | 16000 | 32.89 | 49.91 | 1.5695          | 62.0891  |
| 0.0435        | 1.7625 | 16100 | 32.99 | 50.86 | 1.5550          | 62.0891  |
| 0.0478        | 1.7734 | 16200 | 31.31 | 49.58 | 1.5662          | 63.5750  |
| 0.0337        | 1.7843 | 16300 | 31.22 | 49.44 | 1.5792          | 63.8901  |
| 0.0387        | 1.7953 | 16400 | 32.2  | 49.5  | 1.5715          | 62.3143  |
| 0.0376        | 1.8062 | 16500 | 31.95 | 49.34 | 1.5783          | 64.0252  |
| 0.0357        | 1.8172 | 16600 | 30.67 | 49.15 | 1.5684          | 65.8712  |
| 0.028         | 1.8281 | 16700 | 29.78 | 48.23 | 1.5544          | 67.5822  |
| 0.0185        | 1.8391 | 16800 | 31.16 | 50.1  | 1.5419          | 64.0252  |
| 0.0408        | 1.8500 | 16900 | 31.25 | 49.53 | 1.5504          | 64.6556  |
| 0.0259        | 1.8610 | 17000 | 30.73 | 50.13 | 1.5501          | 68.3026  |
| 0.0276        | 1.8719 | 17100 | 29.97 | 49.96 | 1.5359          | 69.0680  |
| 0.0201        | 1.8829 | 17200 | 32.06 | 51.35 | 1.5504          | 63.2598  |
| 0.0268        | 1.8938 | 17300 | 34.47 | 51.53 | 1.5322          | 60.0180  |
| 0.023         | 1.9048 | 17400 | 34.65 | 52.44 | 1.5407          | 59.5678  |
| 0.0138        | 1.9157 | 17500 | 33.97 | 51.65 | 1.5653          | 61.5038  |
| 0.0164        | 1.9267 | 17600 | 32.62 | 50.82 | 1.5470          | 64.6556  |
| 0.018         | 1.9376 | 17700 | 33.09 | 50.06 | 1.5450          | 61.9541  |
| 0.0246        | 1.9485 | 17800 | 31.67 | 50.21 | 1.5473          | 65.1058  |
| 0.0196        | 1.9595 | 17900 | 31.1  | 50.36 | 1.5414          | 66.9068  |
| 0.0163        | 1.9704 | 18000 | 32.63 | 50.62 | 1.5453          | 62.9446  |
| 0.018         | 1.9814 | 18100 | 32.99 | 50.91 | 1.5361          | 61.2337  |
| 0.025         | 1.9923 | 18200 | 33.18 | 50.61 | 1.5394          | 61.2787  |
| 0.0081        | 2.0033 | 18300 | 33.39 | 50.63 | 1.5481          | 61.1436  |
| 0.0147        | 2.0142 | 18400 | 33.39 | 51.22 | 1.5500          | 61.3688  |
| 0.0121        | 2.0252 | 18500 | 32.35 | 50.74 | 1.5515          | 62.0441  |
| 0.0154        | 2.0361 | 18600 | 31.18 | 50.35 | 1.5524          | 66.0964  |
| 0.0157        | 2.0471 | 18700 | 31.68 | 50.44 | 1.5585          | 65.4660  |
| 0.0089        | 2.0580 | 18800 | 33.33 | 50.68 | 1.5559          | 61.2337  |
| 0.0112        | 2.0690 | 18900 | 31.64 | 50.43 | 1.5552          | 65.1959  |
| 0.0109        | 2.0799 | 19000 | 31.69 | 50.38 | 1.5548          | 64.9707  |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ariffiq99/COPA_CRAB_XLM_roberta_base_finetuned | Ariffiq99 | "2024-06-23T02:36:48Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/CRAB_xlm_roberta_base_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:22:21Z" | ---
license: mit
base_model: Ariffiq99/CRAB_xlm_roberta_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_XLM_roberta_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_XLM_roberta_base_finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/CRAB_xlm_roberta_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- F1: 0.514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| No log | 1.0 | 250 | 0.6933 | 0.496 |
| 0.7347 | 2.0 | 500 | 0.6939 | 0.52 |
| 0.7347 | 3.0 | 750 | 0.6931 | 0.486 |
| 0.7095 | 4.0 | 1000 | 0.6934 | 0.496 |
| 0.7095 | 5.0 | 1250 | 0.6926 | 0.506 |
| 0.7083 | 6.0 | 1500 | 0.6930 | 0.522 |
| 0.7083 | 7.0 | 1750 | 0.6927 | 0.514 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ariffiq99/COPA_CRAB_Bert_Base_Uncased_Finetuned | Ariffiq99 | "2024-06-23T02:25:56Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/CRAB_bert_base_uncased_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:22:26Z" | ---
license: apache-2.0
base_model: Ariffiq99/CRAB_bert_base_uncased_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_Bert_Base_Uncased_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_Bert_Base_Uncased_Finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_bert_base_uncased_finetuned](https://huggingface.co/Ariffiq99/CRAB_bert_base_uncased_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6690
- F1: 0.7181
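As a usage illustration, here is a minimal COPA-style inference sketch assuming the standard 🤗 Transformers multiple-choice head; the premise and choices below are made up for the example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "Ariffiq99/COPA_CRAB_Bert_Base_Uncased_Finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Encode the premise paired with each choice, then add a batch dimension
# so input_ids has shape (batch, num_choices, seq_len).
encoding = tokenizer([premise] * len(choices), choices,
                     return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probability assigned to each choice
```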
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.6481 | 0.6345 |
| No log | 2.0 | 126 | 0.5997 | 0.6829 |
| No log | 3.0 | 189 | 0.5723 | 0.6944 |
| No log | 4.0 | 252 | 0.5751 | 0.6898 |
| No log | 5.0 | 315 | 0.5906 | 0.7149 |
| No log | 6.0 | 378 | 0.6036 | 0.7273 |
| No log | 7.0 | 441 | 0.6245 | 0.7280 |
| 0.4609 | 8.0 | 504 | 0.6476 | 0.7213 |
| 0.4609 | 9.0 | 567 | 0.6688 | 0.7181 |
| 0.4609 | 10.0 | 630 | 0.6690 | 0.7181 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ariffiq99/COPA_CRAB_xlm_roberta_large_finetuned | Ariffiq99 | "2024-06-23T02:36:05Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/CRAB_xlm_roberta_large_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:22:47Z" | ---
license: mit
base_model: Ariffiq99/CRAB_xlm_roberta_large_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_xlm_roberta_large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_xlm_roberta_large_finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_xlm_roberta_large_finetuned](https://huggingface.co/Ariffiq99/CRAB_xlm_roberta_large_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6929
- F1: 0.496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| No log | 1.0 | 63 | 0.6938 | 0.538 |
| No log | 2.0 | 126 | 0.6944 | 0.49 |
| No log | 3.0 | 189 | 0.6926 | 0.522 |
| No log | 4.0 | 252 | 0.6934 | 0.492 |
| No log | 5.0 | 315 | 0.6928 | 0.506 |
| No log | 6.0 | 378 | 0.6945 | 0.502 |
| No log | 7.0 | 441 | 0.6940 | 0.476 |
| 0.7077 | 8.0 | 504 | 0.6938 | 0.528 |
| 0.7077 | 9.0 | 567 | 0.6935 | 0.488 |
| 0.7077 | 10.0 | 630 | 0.6929 | 0.496 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
c00kiemaster/FabioAzevedo | c00kiemaster | "2024-06-25T00:00:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:29:51Z" | Entry not found |
curlybox888/peft-starcoder-lora-a100 | curlybox888 | "2024-06-23T02:31:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:31:30Z" | Entry not found |
Ariffiq99/COPA_CRAB_albert_base_finetuned | Ariffiq99 | "2024-06-23T02:33:59Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/CRAB_albert_base_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:32:02Z" | ---
license: apache-2.0
base_model: Ariffiq99/CRAB_albert_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_albert_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_albert_base_finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_albert_base_finetuned](https://huggingface.co/Ariffiq99/CRAB_albert_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4567
- F1: 0.674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| No log | 1.0 | 63 | 0.6274 | 0.666 |
| No log | 2.0 | 126 | 0.5703 | 0.69 |
| No log | 3.0 | 189 | 0.6324 | 0.704 |
| No log | 4.0 | 252 | 0.7201 | 0.69 |
| No log | 5.0 | 315 | 1.0079 | 0.686 |
| No log | 6.0 | 378 | 1.1511 | 0.678 |
| No log | 7.0 | 441 | 1.2763 | 0.67 |
| 0.2791 | 8.0 | 504 | 1.3775 | 0.676 |
| 0.2791 | 9.0 | 567 | 1.4347 | 0.674 |
| 0.2791 | 10.0 | 630 | 1.4567 | 0.674 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Mistermango24/AutismMix-PonyXL | Mistermango24 | "2024-06-23T02:52:55Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2024-06-23T02:32:33Z" | ---
license: artistic-2.0
---
|
aisuko/gpt-2-1.5B-ggml | aisuko | "2024-06-23T03:07:48Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-23T02:38:38Z" | ---
license: mit
---
Adapted by Aisuko
|
rmayormartins/speech-accent-pt-br-classifier | rmayormartins | "2024-06-23T03:09:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"code",
"en",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:ecl-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-23T02:39:12Z" | ---
license: ecl-2.0
datasets:
- mozilla-foundation/common_voice_11_0
language:
- en
- pt
metrics:
- accuracy
library_name: transformers
tags:
- code
---
# Speech Portuguese (Brazilian) Accent Classifier
🎙️🤖🇧🇷
This project is a speech accent classifier that distinguishes between Portuguese (Brazilian) and other accents.
## Project Overview
This application uses a trained model to classify speech accents into two categories:
1. Portuguese (Brazilian)
2. Other
The model is based on the author's work [results] and utilizes the Portuguese portion of the Common Voice dataset (version 11.0) from Mozilla Foundation.
## Dataset
The project uses the Portuguese subset of the Common Voice dataset:
- Dataset: "mozilla-foundation/common_voice_11_0", "pt"
Brazilian accents included in the dataset:
- Português do Brasil, Região Sul do Brasil
- Paulistano
- Paulista, Brasileiro
- Carioca
- Mato Grosso
- Mineiro
- Interior Paulista
- Gaúcho
- Nordestino
- And various regional mixes
## Model and Processor
The project utilizes the following model and processor:
- Base Model: "facebook/wav2vec2-base-960h"
- Processor: Wav2Vec2Processor.from_pretrained
## Model Versions
Three versions of the model were trained with different configurations:
1. **(OLD) v 1.1**:
- Epochs: 3
- Training samples: 1000
- Validation samples: 200
2. **(OLD) v 1.2**:
- Epochs: 10
- Training samples: 1000
- Validation samples: 500
3. **(NEW) v 1.3**:
- Epochs: 20
- Training samples: 5000
- Validation samples: 1000
All models were trained on a high-RAM GPU runtime on Google Colab Pro.
## Model Structure (files)
Each version of the model includes the following files:
`results/`: `config.json` | `preprocessor_config.json` | `model.safetensors` | `special_tokens_map.json` | `tokenizer_config.json` | `vocab.json`
## How to Use
Test by recording or uploading an audio file; short sentences are recommended.
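Here is a minimal inference sketch, assuming the checkpoint loads with the standard 🤗 Transformers audio-classification pipeline (`my_recording.wav` is a placeholder for your own file):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="rmayormartins/speech-accent-pt-br-classifier",
)

# Returns scores for the two classes: Portuguese (Brazilian) vs. Other.
print(classifier("my_recording.wav"))
```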
## License
This project is licensed under the Eclipse Public License 2.0 (ECL-2.0).
## Developer Information
Developed by Ramon Mayor Martins (2024)
- Email: rmayormartins@gmail.com
- Homepage: https://rmayormartins.github.io/
- Twitter: @rmayormartins
- GitHub: https://github.com/rmayormartins
## Acknowledgements
Special thanks to Instituto Federal de Santa Catarina (Federal Institute of Santa Catarina) IFSC-São José-Brazil.
## Contact
For any queries or suggestions, please contact the developer using the information provided above. |
Ariffiq99/COPA_E_CARE_Bert_Base_Uncased_Finetuned | Ariffiq99 | "2024-06-23T02:50:48Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/e_care_bert_base_uncased_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:47:17Z" | ---
license: apache-2.0
base_model: Ariffiq99/e_care_bert_base_uncased_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_E_CARE_Bert_Base_Uncased_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_E_CARE_Bert_Base_Uncased_Finetuned
This model is a fine-tuned version of [Ariffiq99/e_care_bert_base_uncased_finetuned](https://huggingface.co/Ariffiq99/e_care_bert_base_uncased_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5892
- F1: 0.7839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.5912 | 0.7258 |
| No log | 2.0 | 126 | 0.5371 | 0.7702 |
| No log | 3.0 | 189 | 0.5355 | 0.7627 |
| No log | 4.0 | 252 | 0.5373 | 0.7747 |
| No log | 5.0 | 315 | 0.5452 | 0.7832 |
| No log | 6.0 | 378 | 0.5626 | 0.7822 |
| No log | 7.0 | 441 | 0.5731 | 0.7815 |
| 0.364 | 8.0 | 504 | 0.5825 | 0.7797 |
| 0.364 | 9.0 | 567 | 0.5874 | 0.7865 |
| 0.364 | 10.0 | 630 | 0.5892 | 0.7839 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
DBangshu/Base_gemma_e5_2_2 | DBangshu | "2024-06-23T02:49:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T02:47:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ariffiq99/COPA_E_CARE_xlm_roberta_large_finetuned | Ariffiq99 | "2024-06-23T05:17:51Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/e_care_xlm_roberta_large_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:49:39Z" | ---
license: mit
base_model: Ariffiq99/e_care_xlm_roberta_large_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_E_CARE_xlm_roberta_large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_E_CARE_xlm_roberta_large_finetuned
This model is a fine-tuned version of [Ariffiq99/e_care_xlm_roberta_large_finetuned](https://huggingface.co/Ariffiq99/e_care_xlm_roberta_large_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6533
- F1: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.4427 | 0.796 |
| No log | 2.0 | 126 | 0.3930 | 0.8140 |
| No log | 3.0 | 189 | 0.3833 | 0.83 |
| No log | 4.0 | 252 | 0.4824 | 0.806 |
| No log | 5.0 | 315 | 0.4836 | 0.82 |
| No log | 6.0 | 378 | 0.5445 | 0.8140 |
| No log | 7.0 | 441 | 0.5651 | 0.816 |
| 0.2666 | 8.0 | 504 | 0.5990 | 0.822 |
| 0.2666 | 9.0 | 567 | 0.6457 | 0.826 |
| 0.2666 | 10.0 | 630 | 0.6533 | 0.82 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ariffiq99/COPA_E_CARE_XLM_roberta_base_finetuned | Ariffiq99 | "2024-06-23T03:10:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/e_care_xlm_roberta_base_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:49:56Z" | ---
license: mit
base_model: Ariffiq99/e_care_xlm_roberta_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_E_CARE_XLM_roberta_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_E_CARE_XLM_roberta_base_finetuned
This model is a fine-tuned version of [Ariffiq99/e_care_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/e_care_xlm_roberta_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7353
- F1: 0.67
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| No log | 1.0 | 250 | 0.9393 | 0.668 |
| 0.2405 | 2.0 | 500 | 1.1072 | 0.684 |
| 0.2405 | 3.0 | 750 | 1.1488 | 0.648 |
| 0.3938 | 4.0 | 1000 | 0.9665 | 0.652 |
| 0.3938 | 5.0 | 1250 | 1.2431 | 0.654 |
| 0.4987 | 6.0 | 1500 | 1.2893 | 0.674 |
| 0.4987 | 7.0 | 1750 | 1.5097 | 0.68 |
| 0.3532 | 8.0 | 2000 | 1.6731 | 0.664 |
| 0.3532 | 9.0 | 2250 | 1.7344 | 0.664 |
| 0.3011 | 10.0 | 2500 | 1.7353 | 0.67 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
friendlyguy774/LoRA_Adapters | friendlyguy774 | "2024-06-23T02:54:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T02:54:38Z" | Entry not found |
Ariffiq99/COPA_E_CARE_albert_base_finetuned | Ariffiq99 | "2024-06-23T02:56:04Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/e_care_albert_base_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-06-23T02:55:01Z" | ---
license: apache-2.0
base_model: Ariffiq99/e_care_albert_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_E_CARE_albert_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_E_CARE_albert_base_finetuned
This model is a fine-tuned version of [Ariffiq99/e_care_albert_base_finetuned](https://huggingface.co/Ariffiq99/e_care_albert_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
- F1: 0.732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.5300 | 0.74 |
| No log | 2.0 | 126 | 0.5038 | 0.76 |
| No log | 3.0 | 189 | 0.5805 | 0.75 |
| No log | 4.0 | 252 | 0.5694 | 0.7700 |
| No log | 5.0 | 315 | 0.6823 | 0.74 |
| No log | 6.0 | 378 | 0.7699 | 0.7420 |
| No log | 7.0 | 441 | 0.7680 | 0.754 |
| 0.2122 | 8.0 | 504 | 0.8489 | 0.738 |
| 0.2122 | 9.0 | 567 | 0.8899 | 0.7300 |
| 0.2122 | 10.0 | 630 | 0.8976 | 0.732 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
manbeast3b/KinoInferTry11 | manbeast3b | "2024-06-23T03:00:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:00:45Z" | Entry not found |
CreitinhoDublagems/Multiple_EminemEras | CreitinhoDublagems | "2024-06-24T01:02:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:04:42Z" | Entry not found |
Fiqking11/Fiqking | Fiqking11 | "2024-06-23T03:07:18Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T03:07:18Z" | ---
license: apache-2.0
---
|
Syldehayem/syl_finetune | Syldehayem | "2024-06-23T03:10:14Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-23T03:10:14Z" | ---
license: unknown
---
|
sasmita-datascientist/dummy | sasmita-datascientist | "2024-06-23T03:12:20Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-23T03:12:20Z" | ---
license: mit
---
|
hasininawoda/persondis | hasininawoda | "2024-06-23T03:13:27Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-06-23T03:13:08Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of disable person
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - hasininawoda/persondis
<Gallery />
## Model description
These are hasininawoda/persondis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of disable person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](hasininawoda/persondis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
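Until the official snippet is added, a minimal sketch using the standard 🤗 Diffusers LoRA-loading API might look like this; the prompt follows the trigger words above, and the output filename is arbitrary:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline, then attach this repo's LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("hasininawoda/persondis")

image = pipe("a photo of disable person").images[0]
image.save("output.png")
```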
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mirobot/myfirstrepo | mirobot | "2024-06-23T03:14:20Z" | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | "2024-06-23T03:14:20Z" | ---
license: afl-3.0
---
|
mhMH2512/12 | mhMH2512 | "2024-06-23T03:17:03Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T03:17:03Z" | ---
license: openrail
---
|
abinavGanesh/JiuZhang3.0-Corpus-SFT | abinavGanesh | "2024-06-23T03:20:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:20:46Z" | Entry not found |
grimjim/llama-3-Nephilim-v1-8B-6.5bpw_h6_exl2 | grimjim | "2024-06-23T03:28:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-23T03:24:35Z" | ---
base_model:
- WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
- mlabonne/NeuralDaredevil-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# llama-3-Nephilim-v1-8B
This repo contains a 6.5bpw exl2 quant of [grimjim/llama-3-Nephilim-v1-8B](https://huggingface.co/grimjim/llama-3-Nephilim-v1-8B).
The originating model is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Here we experiment with a SLERP merge in which the second model is blended in at a very low weight (0.001) to subtly modulate the output of the base model.
The base model was assembled to achieve high MMLU scores while avoiding refusals, whereas the additional model was trained specifically (apparently as a copilot) for offensive and defensive cybersecurity. Though neither model targeted roleplay as a use case, the resulting intelligence, acuity, and text generation of the merge are of interest. The merge is aggressively creative, within bounds.
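For intuition, spherical linear interpolation between two weight tensors can be sketched as follows; this is a conceptual illustration, not mergekit's actual implementation:
```python
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float) -> np.ndarray:
    """Spherically interpolate between two weight tensors at fraction t."""
    v0, v1 = w0.ravel(), w1.ravel()
    cos_omega = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * w0 + t * w1
    out = (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
    return out.reshape(w0.shape)

# At t = 0.001, the merged weights stay extremely close to the base model's.
```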
Tested with temperature=1.0-1.2 and minP=0.01 along with a custom Instruct prompt geared toward reducing refusals during roleplay text generation without compromising overall model safety: [Llama 3 Instruct Direct](https://huggingface.co/debased-ai/SillyTavern-settings/tree/main/advanced_formatting/instruct_mode).
Care should be taken when using this model, as it is possible that harmful outputs could be generated. Given that this model is derivative, responsible use is further mandated by the WhiteRabbitNeo Usage Restrictions Extension to the Llama-3 License. This model is further subject to CC-BY-NC-4.0 by default, meaning that commercial use is restricted, barring an alternative licensing agreement.
- [GGUF quants (w/ llama-bpe pre-tokenizer)](https://huggingface.co/grimjim/llama-3-Nephilim-v1-8B-GGUF)
Built with Meta Llama 3.
# WhiteRabbitNeo Extension to Llama-3 Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mlabonne/NeuralDaredevil-8B-abliterated
layer_range: [0,32]
- model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
layer_range: [0,32]
merge_method: slerp
base_model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
t:
- value: 0.001
dtype: bfloat16
```
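As a hypothetical reproduction sketch (not an official recipe from this repo), saving the YAML above to a file and invoking mergekit's `mergekit-yaml` entry point would rebuild the merge; both paths below are illustrative placeholders.
```python
# Hypothetical sketch: rebuild the merge by running mergekit's CLI on
# the config above. File and output paths are illustrative placeholders.
import subprocess

subprocess.run(
    ["mergekit-yaml", "nephilim-v1.yaml", "./llama-3-Nephilim-v1-8B-merged"],
    check=True,
)
```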
|
Suras2/Tchan | Suras2 | "2024-06-23T03:30:45Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-23T03:29:51Z" | ---
license: apache-2.0
---
|
shinieaggarwal/sematic-segmentation-model | shinieaggarwal | "2024-06-23T03:31:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:31:15Z" | Entry not found |
Replete-AI/Adapter_For_Replete-Coder-Qwen2-1.5b | Replete-AI | "2024-06-23T04:52:54Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"dataset:Replete-AI/code_bagel_hermes-2.5",
"dataset:Replete-AI/code_bagel",
"dataset:Replete-AI/OpenHermes-2.5-Uncensored",
"dataset:teknium/OpenHermes-2.5",
"dataset:layoric/tiny-codes-alpaca",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"dataset:TIGER-Lab/MathInstruct",
"dataset:chargoddard/commitpack-ft-instruct-rated",
"dataset:iamturun/code_instructions_120k_alpaca",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"dataset:coseal/CodeUltraFeedback_binarized",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"dataset:jondurbin/airoboros-2.2",
"dataset:camel-ai",
"dataset:lmsys/lmsys-chat-1m",
"dataset:CollectiveCognition/chats-data-2023-09-22",
"dataset:CoT-Alpaca-GPT4",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:teknium/GPT4-LLM-Cleaned",
"dataset:GPTeacher",
"dataset:OpenGPT",
"dataset:meta-math/MetaMathQA",
"dataset:Open-Orca/SlimOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:Unnatural-Instructions-GPT4",
"base_model:Qwen/Qwen2-1.5B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T03:35:28Z" | ---
license: apache-2.0
base_model: Qwen/Qwen2-1.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
datasets:
- Replete-AI/code_bagel_hermes-2.5
- Replete-AI/code_bagel
- Replete-AI/OpenHermes-2.5-Uncensored
- teknium/OpenHermes-2.5
- layoric/tiny-codes-alpaca
- glaiveai/glaive-code-assistant-v3
- ajibawa-2023/Code-290k-ShareGPT
- TIGER-Lab/MathInstruct
- chargoddard/commitpack-ft-instruct-rated
- iamturun/code_instructions_120k_alpaca
- ise-uiuc/Magicoder-Evol-Instruct-110K
- cognitivecomputations/dolphin-coder
- nickrosh/Evol-Instruct-Code-80k-v1
- coseal/CodeUltraFeedback_binarized
- glaiveai/glaive-function-calling-v2
- CyberNative/Code_Vulnerability_Security_DPO
- jondurbin/airoboros-2.2
- camel-ai
- lmsys/lmsys-chat-1m
- CollectiveCognition/chats-data-2023-09-22
- CoT-Alpaca-GPT4
- WizardLM/WizardLM_evol_instruct_70k
- WizardLM/WizardLM_evol_instruct_V2_196k
- teknium/GPT4-LLM-Cleaned
- GPTeacher
- OpenGPT
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
- garage-bAInd/Open-Platypus
- anon8231489123/ShareGPT_Vicuna_unfiltered
- Unnatural-Instructions-GPT4
model-index:
- name: Replete-Coder-llama3-8b
results:
- task:
name: HumanEval
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value:
verified: false
- task:
name: AI2 Reasoning Challenge
type: text-generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: accuracy
value:
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: accuracy
value:
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value:
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: multiple_choice_accuracy
value:
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: accuracy
value:
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value:
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
---
This is the adapter (aka LoRA) for the Replete-AI/Replete-Coder-Qwen-1.5b AI model. Feel free to attach it to your own models and see how it performs.
We'd love to hear about the models you create with this adapter in the community posts!
Link to the original model below:
- https://huggingface.co/Replete-AI/Replete-Coder-Qwen-1.5b
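As a minimal sketch of how an adapter like this is typically attached with PEFT (the base repo id comes from this card's metadata; the prompt and generation length are illustrative):
```python
# Minimal sketch: attach this LoRA adapter to the Qwen2-1.5B base with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B", device_map="auto")
model = PeftModel.from_pretrained(base, "Replete-AI/Adapter_For_Replete-Coder-Qwen2-1.5b")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```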
___________________________________________________________________
# Original Model card
# Replete-Coder-Qwen2-1.5b
Finetuned by: Rombodawg
### More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it is also trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding; use it for all your needs! We are truly trying to make the GPT killer!
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png)
Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b.
You can check out their website for cloud compute rental below.
- https://tensordock.com
__________________________________________________________________________________________________
Replete-Coder-Qwen2-1.5b is a general-purpose model that is specially trained for coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data, totaling 3.9 million lines, roughly 1 billion tokens, or 7.27 GB of instruct data. The data was 100% uncensored and fully deduplicated before training.
The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
- Advanced coding capabilities in over 100 coding languages
- Advanced code translation (between languages)
- Security and vulnerability prevention related coding capabilities
- General purpose use
- Uncensored use
- Function calling
- Advanced math use
- Use on low end (8b) and mobile (1.5b) platforms
Notice: the Replete-Coder series of models is fine-tuned on a context window of 8192 tokens. Performance beyond this context window is not guaranteed.
__________________________________________________________________________________________________
You can find the 25% non-coding instruction data below:
- https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
And the 75% coding specific instruction data below:
- https://huggingface.co/datasets/Replete-AI/code_bagel
These two datasets were combined to create the final dataset for training, which is linked below:
- https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
__________________________________________________________________________________________________
## Prompt Template: ChatML
```
<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}
```
Note: The system prompt varies in training data, but the most commonly used one is:
```
Below is an instruction that describes a task, Write a response that appropriately completes the request.
```
End token:
```
<|endoftext|>
```
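As a minimal sketch, the template above can be built with `apply_chat_template`, assuming the model's tokenizer ships a ChatML chat template (the user message is illustrative):
```python
# Minimal sketch: build the ChatML prompt above via apply_chat_template.
# Assumes the tokenizer carries a ChatML chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Replete-AI/Replete-Coder-Qwen-1.5b")
messages = [
    {"role": "system", "content": "Below is an instruction that describes a task, Write a response that appropriately completes the request."},
    {"role": "user", "content": "Write a bubble sort in Python."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # <|im_start|>system ... <|im_start|>assistant
```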
__________________________________________________________________________________________________
Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't have been possible.
Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their huggingface accounts linked below:
- https://huggingface.co/teknium
- https://huggingface.co/jondurbin
Another special thanks to Unsloth for being the main training method for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model.
- https://github.com/unslothai/unsloth
- https://colab.research.google.com/drive/1eXGqy5M--0yW4u0uRnmNgBka-tDk2Li0?usp=sharing
__________________________________________________________________________________________________
## Join the Replete-AI Discord! We are a great and loving community!
- https://discord.gg/ZZbnsmVnjD
|
DBangshu/Base_gemma_e5_3_2 | DBangshu | "2024-06-23T03:37:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T03:35:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zack157/ModelosdeVozIA | Zack157 | "2024-06-29T20:20:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:35:54Z" | Entry not found |
SunShineFlower/Heizou_500epoch | SunShineFlower | "2024-06-23T03:44:07Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T03:42:58Z" | ---
license: openrail
---
|
luishcarvalho/llama_smart_contract_2 | luishcarvalho | "2024-06-23T03:45:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T03:43:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjjj7/mjhistoryera | mjjj7 | "2024-06-23T03:44:12Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T03:43:26Z" | ---
license: openrail
---
|
SunShineFlower/BlackSwan_800epoch | SunShineFlower | "2024-06-23T03:46:36Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-23T03:45:04Z" | ---
license: openrail
---
|
DokiQueen/Mori-Calliope | DokiQueen | "2024-06-23T03:49:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:47:34Z" | Entry not found |
sirishgam001/videomae-base-finetuned-ucf101-subset | sirishgam001 | "2024-06-23T07:21:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-23T03:50:42Z" | Entry not found |
alphaveneno/distilbert-base-uncased-finetuned-imdb | alphaveneno | "2024-06-23T04:06:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-06-23T03:51:39Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4891
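Since this is a masked-language-modeling fine-tune, the loss above corresponds to a perplexity of roughly exp(2.4891) ≈ 12.0. A minimal usage sketch (the sample sentence is illustrative):
```python
# Minimal sketch: perplexity from the reported eval loss, plus fill-mask usage.
import math
from transformers import pipeline

print(f"Perplexity: {math.exp(2.4891):.2f}")  # ~12.05

unmasker = pipeline(
    "fill-mask", model="alphaveneno/distilbert-base-uncased-finetuned-imdb"
)
for pred in unmasker("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```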
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
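As a rough sketch, the settings above correspond to a `TrainingArguments` configuration along these lines (the output directory is illustrative; unlisted options keep their defaults):
```python
# Minimal sketch: the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed precision
)
```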
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6814 | 1.0 | 157 | 2.5118 |
| 2.5912 | 2.0 | 314 | 2.4497 |
| 2.528 | 3.0 | 471 | 2.4822 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
1xlc/korean | 1xlc | "2024-06-23T03:53:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T03:52:43Z" | Entry not found |
lilrag10/lmd_8bars_tokenizer | lilrag10 | "2024-06-23T03:53:46Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T03:53:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jiangcongtao/autotrain-phi3-mini-4k-instruct-oqkv | jiangcongtao | "2024-06-23T04:07:39Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:dair-ai/emotion",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T03:59:25Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: microsoft/Phi-3-mini-4k-instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- dair-ai/emotion
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # use the model's device rather than hard-coding 'cuda'
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
bdo1305/gemma-2b | bdo1305 | "2024-06-23T04:12:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T04:06:35Z" | Entry not found |
lucasius/git-base-pokemon | lucasius | "2024-06-23T05:57:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"git",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T04:06:51Z" | Entry not found |
copybaiter/lora | copybaiter | "2024-06-23T11:25:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T04:12:05Z" | Entry not found |
YaPyknyl/Ferryman | YaPyknyl | "2024-06-23T04:19:15Z" | 0 | 0 | null | [
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-06-23T04:12:57Z" | ---
license: cc-by-nc-4.0
language:
- en
---
I made this because I could not find the model on the AI HUB Discord server.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64a21bfbf09ad861c84d2321/WicgQPB-SUunF_AfuEN2s.jpeg)
|
karthikmit/openai-whisper-medium-FT-v1 | karthikmit | "2024-06-23T04:14:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T04:14:14Z" | Entry not found |
NormalUserrname/adssadasddas | NormalUserrname | "2024-06-23T04:18:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-23T04:18:52Z" | Entry not found |
DBangshu/Base_gemma_e5_4_2 | DBangshu | "2024-06-23T04:26:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T04:24:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |