| modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–728M) | likes (int64, 0–10.9k) | library_name (string, 238 classes) | tags (sequence, 1–2.16k items) | pipeline_tag (string, 48 classes) | createdAt (unknown) | card (string, 1–901k chars) |
|---|---|---|---|---|---|---|---|---|---|
fleonce/dnrti-t5-base | fleonce | "2024-06-20T09:39:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"iter",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T09:36:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DBangshu/gemma_e5_4_4 | DBangshu | "2024-06-20T09:41:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-20T09:38:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
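The usage section is empty, so here is a minimal starter sketch. It assumes only what the repository tags state (a `gemma` checkpoint with the `text-generation` pipeline tag); the prompt and generation settings are illustrative, not documented by the card.

```python
# Hypothetical starter for this checkpoint. Imports transformers lazily so the
# module itself loads without the library installed.
MODEL_ID = "DBangshu/gemma_e5_4_4"  # from the repository tags: gemma, text-generation


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Run the checkpoint through the standard text-generation pipeline."""
    from transformers import pipeline  # requires `pip install transformers`

    pipe = pipeline("text-generation", model=MODEL_ID)
    return pipe(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]


# Example (downloads the weights on first call):
# print(generate("The capital of France is"))
```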
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmanCode22/CHATGEM | AmanCode22 | "2024-06-20T09:43:22Z" | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:OpenGVLab/ShareGPT-4o",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T09:40:19Z" | ---
datasets:
- OpenGVLab/ShareGPT-4o
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
--- |
abdfajar707/llama3_8B_lora_model_rkp_v3 | abdfajar707 | "2024-06-20T09:41:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T09:41:43Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** abdfajar707
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
limaatulya/my_awesome_billsum_model_36 | limaatulya | "2024-06-20T09:46:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-20T09:45:40Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_36
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Rouge1: 0.9721
- Rouge2: 0.8819
- Rougel: 0.9256
- Rougelsum: 0.9271
- Gen Len: 4.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
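One derived quantity worth sanity-checking against these settings is the optimizer step count: the results table reports 12 steps per epoch, which at batch size 16 implies a training split of roughly 177–192 examples and 120 total steps over 10 epochs. A short helper (hypothetical, not part of the actual training script) reproduces that arithmetic:

```python
import math

# Hyperparameters as listed in the card (keys are illustrative, not the exact
# argument names of transformers' Seq2SeqTrainingArguments).
HPARAMS = {
    "learning_rate": 2e-5,
    "train_batch_size": 16,
    "eval_batch_size": 16,
    "seed": 42,
    "num_epochs": 10,
}


def total_optimizer_steps(num_examples: int, batch_size: int, epochs: int) -> int:
    """Steps per epoch = ceil(examples / batch size); total = per-epoch * epochs."""
    return math.ceil(num_examples / batch_size) * epochs
```

For any split size between 177 and 192 examples this gives 12 steps per epoch and 120 steps in total, matching the step column in the results table.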
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 1.9874 | 0.4145 | 0.2913 | 0.3883 | 0.3891 | 17.6042 |
| No log | 2.0 | 24 | 1.4300 | 0.4322 | 0.3091 | 0.4061 | 0.4068 | 17.0833 |
| No log | 3.0 | 36 | 0.9451 | 0.5076 | 0.3886 | 0.4814 | 0.48 | 14.75 |
| No log | 4.0 | 48 | 0.6345 | 0.8401 | 0.7297 | 0.7858 | 0.7884 | 7.625 |
| No log | 5.0 | 60 | 0.5226 | 0.9591 | 0.8586 | 0.8998 | 0.9042 | 5.125 |
| No log | 6.0 | 72 | 0.4907 | 0.9701 | 0.8736 | 0.9129 | 0.9167 | 4.8958 |
| No log | 7.0 | 84 | 0.4783 | 0.9701 | 0.8736 | 0.9129 | 0.9167 | 4.8958 |
| No log | 8.0 | 96 | 0.4697 | 0.9721 | 0.8819 | 0.9256 | 0.9271 | 4.9167 |
| No log | 9.0 | 108 | 0.4627 | 0.9721 | 0.8819 | 0.9256 | 0.9271 | 4.9167 |
| No log | 10.0 | 120 | 0.4601 | 0.9721 | 0.8819 | 0.9256 | 0.9271 | 4.9167 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
zakariyafirachine/Recommendation_using_t5___ | zakariyafirachine | "2024-06-20T09:47:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-20T09:46:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
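The usage section is empty; the sketch below assumes only what the tags state (a `t5` checkpoint with the `text2text-generation` pipeline tag) and uses the standard transformers seq2seq API. The function name and input format are illustrative — the card does not document the expected prompt.

```python
# Hypothetical starter: assumes the standard transformers seq2seq API.
MODEL_ID = "zakariyafirachine/Recommendation_using_t5___"


def recommend(text: str, max_new_tokens: int = 48) -> str:
    """Encode `text`, generate with the fine-tuned T5, and decode the output."""
    # Lazy imports so the module loads without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```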
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IsakNordgren/Llama-3-mistral-sum | IsakNordgren | "2024-06-20T09:46:24Z" | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"Labagaite/mistral-Summarizer-7b-instruct-v0.2",
"base_model:Labagaite/mistral-Summarizer-7b-instruct-v0.2",
"region:us"
] | null | "2024-06-20T09:46:23Z" | ---
base_model:
- Labagaite/mistral-Summarizer-7b-instruct-v0.2
tags:
- merge
- mergekit
- lazymergekit
- Labagaite/mistral-Summarizer-7b-instruct-v0.2
---
# Llama-3-mistral-sum
Llama-3-mistral-sum is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Labagaite/mistral-Summarizer-7b-instruct-v0.2](https://huggingface.co/Labagaite/mistral-Summarizer-7b-instruct-v0.2)
## 🧩 Configuration
```yaml
models:
  - model: AI-Sweden-Models/Llama-3-8B-instruct
    # No parameters necessary for base model
  - model: Labagaite/mistral-Summarizer-7b-instruct-v0.2
    parameters:
      density: 0.53
      weight: 0.6
merge_method: dare_ties
base_model: AI-Sweden-Models/Llama-3-8B-instruct
parameters:
  int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "IsakNordgren/Llama-3-mistral-sum"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
satish99017/gpt2-reuters-tokenizer | satish99017 | "2024-06-20T09:46:57Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T09:46:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
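The card documents nothing about this repository; its name suggests a GPT-2-style tokenizer retrained on Reuters text. A minimal sketch under that assumption, using the standard `AutoTokenizer` loading path:

```python
# Sketch only: the repository name suggests a tokenizer, not a full model, so
# this loads it with AutoTokenizer. The example sentence is illustrative.
REPO_ID = "satish99017/gpt2-reuters-tokenizer"


def tokenize(text: str):
    """Load the tokenizer from the Hub and split `text` into subword tokens."""
    from transformers import AutoTokenizer  # requires `pip install transformers`

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    return tokenizer.tokenize(text)


# Example (downloads the tokenizer files on first call):
# print(tokenize("Oil prices rose sharply on Monday."))
```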
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ChatK/my_awesome_eli5_clm-model | ChatK | "2024-06-20T09:47:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:47:05Z" | Entry not found |
Ksgk-fy/genius_helper_v1_Maria_product_v1 | Ksgk-fy | "2024-06-20T10:40:34Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-20T09:50:33Z" | Entry not found |
starfishvenus/melili-experiment | starfishvenus | "2024-06-20T09:57:27Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-20T09:53:20Z" | ---
license: unknown
---
|
Jamin20/Baseline_DE | Jamin20 | "2024-06-20T09:56:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:56:34Z" | Entry not found |
Jamin20/Audio-based_Augment_DE | Jamin20 | "2024-06-20T09:57:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:57:19Z" | Entry not found |
JamesSpray/llama-2-7b-chat-bnb-4bit-ift-dpo-002 | JamesSpray | "2024-06-20T09:58:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T09:57:24Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
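The usage section is empty; the repository name and `unsloth` tag suggest a 4-bit Llama-2 chat fine-tune, so the sketch below assumes Unsloth's `FastLanguageModel` loading path. The `max_seq_length` value is hypothetical — the card does not state the training configuration.

```python
# Sketch only: assumes an Unsloth 4-bit checkpoint; adjust max_seq_length to
# whatever the model was actually trained with.
MODEL_ID = "JamesSpray/llama-2-7b-chat-bnb-4bit-ift-dpo-002"


def load(max_seq_length: int = 2048):
    """Load the 4-bit model and its tokenizer via Unsloth, ready for inference."""
    from unsloth import FastLanguageModel  # requires `pip install unsloth`

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=MODEL_ID,
        max_seq_length=max_seq_length,
        load_in_4bit=True,
    )
    FastLanguageModel.for_inference(model)  # switches on Unsloth's fast generation path
    return model, tokenizer
```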
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jamin20/SpecAugment_L_DE | Jamin20 | "2024-06-20T09:57:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:57:29Z" | Entry not found |
Jamin20/MixSpeech_DE | Jamin20 | "2024-06-20T09:57:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:57:43Z" | Entry not found |
Jamin20/Audio-based_Augment_SpecAugment_L_DE | Jamin20 | "2024-06-20T09:57:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:57:53Z" | Entry not found |
Jamin20/Audio-based_Augment_MixSpeech_DE | Jamin20 | "2024-06-20T09:58:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:58:03Z" | Entry not found |
Jamin20/MixSpeech_SpecAugment_L_DE | Jamin20 | "2024-06-20T09:58:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T09:58:14Z" | Entry not found |
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-100-percent-low-med-high-nv-embed | AdamKasumovic | "2024-06-20T10:00:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T09:58:33Z" | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nianlong/memsum-cnndm-summarization | nianlong | "2024-06-20T10:01:37Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-20T10:00:43Z" | ---
license: apache-2.0
---
|
cosmic-God/Exp1 | cosmic-God | "2024-06-20T10:01:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:01:48Z" | Entry not found |
morturr/flan-t5-small-amazon-text-classification-2024-06-20 | morturr | "2024-06-20T11:16:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | "2024-06-20T10:02:05Z" | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: flan-t5-small-amazon-text-classification-2024-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-amazon-text-classification-2024-06-20
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.1+cu121
- Datasets 2.10.1
- Tokenizers 0.15.2
|
creativeforce/model-change-fsnv | creativeforce | "2024-06-20T10:28:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:02:08Z" | Entry not found |
Floriankidev/swin-tiny-patch4-window7-224-finetuned-eurosat-finetuned-eurosat | Floriankidev | "2024-06-20T10:47:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-20T10:02:45Z" | Entry not found |
basakdemirok/bert-base-multilingual-cased-off_detect_v0_seed42 | basakdemirok | "2024-06-20T10:28:22Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-20T10:04:08Z" | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: basakdemirok/bert-base-multilingual-cased-off_detect_v0_seed42
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# basakdemirok/bert-base-multilingual-cased-off_detect_v0_seed42
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1350
- Validation Loss: 0.4174
- Train F1: 0.6091
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7488, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.4017 | 0.3430 | 0.5471 | 0 |
| 0.2964 | 0.3350 | 0.6146 | 1 |
| 0.2104 | 0.3516 | 0.5968 | 2 |
| 0.1350 | 0.4174 | 0.6091 | 3 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.13.1
- Datasets 2.4.0
- Tokenizers 0.19.1
|
jihyunnn/sceejayRVC | jihyunnn | "2024-06-20T10:05:13Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-20T10:05:13Z" | ---
license: unknown
---
|
varun-v-rao/gpt2-large-bn-adapter-7.42M-snli-model3 | varun-v-rao | "2024-06-20T13:47:44Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-06-20T10:06:39Z" | Entry not found |
AshuDon/Ashu_Voice | AshuDon | "2024-06-20T10:15:40Z" | 0 | 0 | null | [
"license:unlicense",
"region:us"
] | null | "2024-06-20T10:07:33Z" | ---
license: unlicense
---
|
alex-abb/Classifier | alex-abb | "2024-06-20T10:08:39Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-20T10:08:39Z" | ---
license: mit
---
|
ILKT/2024-06-20_12-08-45 | ILKT | "2024-06-20T10:11:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"ILKT",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | "2024-06-20T10:10:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanungaddi/k3_yolov10 | hanungaddi | "2024-06-20T10:11:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:10:41Z" | Entry not found |
Wwzl/modely | Wwzl | "2024-06-20T10:11:13Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-20T10:11:13Z" | ---
license: apache-2.0
---
|
FazleHasan191/paligemma_attire_200_448 | FazleHasan191 | "2024-06-21T08:10:46Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-20T10:14:31Z" | Entry not found |
manbeast3b/KinoInferL25 | manbeast3b | "2024-06-20T10:14:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:14:51Z" | Entry not found |
richardlastrucci/m2m100-afr-zul | richardlastrucci | "2024-06-20T11:29:21Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-20T10:16:43Z" | Entry not found |
Kitajiang/qwen2_fine_3 | Kitajiang | "2024-06-20T10:17:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:17:10Z" | Entry not found |
Kitajiang/qwen2_fine_4 | Kitajiang | "2024-06-20T10:17:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:17:33Z" | Entry not found |
mandarchaudharii/maintenancebot_gptadded | mandarchaudharii | "2024-06-20T10:29:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T10:18:35Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MohamedAshour1993/Bid_Master_v1 | MohamedAshour1993 | "2024-06-20T10:45:11Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-20T10:22:36Z" | Entry not found |
njaana/phi3-mini-new-model-lora-adapters | njaana | "2024-06-20T10:23:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T10:23:33Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** njaana
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FazleHasan191/paligemma_attire_200_224 | FazleHasan191 | "2024-06-20T11:43:53Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-20T10:23:36Z" | Entry not found |
mrr-codes/q-FrozenLake-v1-4x4-noSlippery | mrr-codes | "2024-06-20T10:28:29Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-20T10:28:25Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym
# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="mrr-codes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
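For readers unfamiliar with tabular Q-learning, the update rule behind an agent like this can be sketched as follows. This is a minimal, self-contained illustration: the state/action values and hyperparameters are hypothetical and are not read from the released `q-learning.pkl` checkpoint.

```python
# Minimal sketch of the tabular Q-learning (Bellman) update.
# All numbers here are illustrative, not taken from this model.

def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table[next_state])
    td_target = reward + gamma * best_next
    q_table[state][action] += alpha * (td_target - q_table[state][action])
    return q_table

# Tiny 2-state, 2-action example.
q = [[0.0, 0.0], [0.0, 1.0]]
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(round(q[0][1], 3))  # 0.1 * (1.0 + 0.99 * 1.0 - 0.0) = 0.199
```

During training this update is applied at every environment step, with an epsilon-greedy policy choosing the action.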
|
soye/vaiv_llm_contest3 | soye | "2024-06-21T14:33:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:30:07Z" | Entry not found |
varun-v-rao/gpt2-large-bn-adapter-7.42M-squad-model2 | varun-v-rao | "2024-06-20T15:57:06Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:openai-community/gpt2-large",
"license:mit",
"region:us"
] | null | "2024-06-20T10:30:20Z" | ---
license: mit
base_model: openai-community/gpt2-large
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: gpt2-large-bn-adapter-7.42M-squad-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-bn-adapter-7.42M-squad-model2
This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
BRIAN12682/Natural-Language-Explanations | BRIAN12682 | "2024-06-20T10:31:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:31:50Z" | Entry not found |
GAI-LLM/myungdonggil_2 | GAI-LLM | "2024-06-20T10:33:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:33:00Z" | Entry not found |
rnaveensrinivas/phi-2-retrained_network_corpus | rnaveensrinivas | "2024-06-20T10:35:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:35:54Z" | Entry not found |
sleephashira/Accident-LLAMA3-Unsloth | sleephashira | "2024-06-20T10:37:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T10:37:02Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** sleephashira
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rnaveensrinivas/gemma-2b-it-retrained_network_corpus | rnaveensrinivas | "2024-06-20T11:08:15Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"license:gemma",
"region:us"
] | null | "2024-06-20T10:39:42Z" | ---
license: gemma
library_name: peft
tags:
- generated_from_trainer
base_model: google/gemma-2b-it
model-index:
- name: gemma-2b-it-retrained_network_corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-it-retrained_network_corpus
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9983 | 1.0 | 28 | 2.3757 |
| 2.1934 | 2.0 | 56 | 2.3035 |
| 1.6321 | 3.0 | 84 | 2.4507 |
| 1.1753 | 4.0 | 112 | 2.6984 |
| 0.8011 | 5.0 | 140 | 2.9980 |
| 0.5162 | 6.0 | 168 | 3.2588 |
| 0.304 | 7.0 | 196 | 3.7305 |
| 0.1822 | 8.0 | 224 | 4.0858 |
| 0.1148 | 9.0 | 252 | 4.3105 |
| 0.0855 | 10.0 | 280 | 4.5450 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
jamemcd/output | jamemcd | "2024-06-20T10:40:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:40:32Z" | Entry not found |
BurakKarakurt/llama-3-8b-Instruct-bnb-4bit-unsloth.Q4_K_M.gguf | BurakKarakurt | "2024-06-20T10:41:33Z" | 0 | 1 | null | [
"license:llama3",
"region:us"
] | null | "2024-06-20T10:41:33Z" | ---
license: llama3
---
|
basakdemirok/bert-base-multilingual-cased-off_detect_v01_seed42 | basakdemirok | "2024-06-20T11:23:12Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-20T10:41:47Z" | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: basakdemirok/bert-base-multilingual-cased-off_detect_v01_seed42
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# basakdemirok/bert-base-multilingual-cased-off_detect_v01_seed42
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0510
- Validation Loss: 0.5725
- Train F1: 0.5959
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14256, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.3607 | 0.3331 | 0.5487 | 0 |
| 0.2104 | 0.3987 | 0.5794 | 1 |
| 0.1029 | 0.5215 | 0.5816 | 2 |
| 0.0510 | 0.5725 | 0.5959 | 3 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.13.1
- Datasets 2.4.0
- Tokenizers 0.19.1
|
pavan01729/Ai_doctor_llama3 | pavan01729 | "2024-06-20T10:58:10Z" | 0 | 0 | null | [
"safetensors",
"license:mit",
"region:us"
] | null | "2024-06-20T10:44:47Z" | ---
license: mit
---
Llama 3 fine-tuned on the Shekswess/medical_llama3_instruct_dataset_short dataset.
Trained for 100 steps. |
danielkosyra/polynomial_1450_7e-4_32b_w0.2 | danielkosyra | "2024-06-20T10:45:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-20T10:45:20Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: polynomial_1450_7e-4_32b_w0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polynomial_1450_7e-4_32b_w0.2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- training_steps: 1450
- mixed_precision_training: Native AMP
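The total train batch size of 320 listed above comes from combining the per-device batch size with gradient accumulation (32 × 10): gradients from 10 micro-batches are summed before each optimizer step. A small illustrative sketch (the function name and sample count are hypothetical, not from the training script):

```python
def accumulated_updates(num_samples, micro_batch=32, accum_steps=10):
    """Return (optimizer_steps, effective_batch_size) for a training
    loop that steps the optimizer once every `accum_steps`
    micro-batches of `micro_batch` samples each."""
    micro_batches = num_samples // micro_batch
    optimizer_steps = micro_batches // accum_steps
    return optimizer_steps, micro_batch * accum_steps

steps, effective = accumulated_updates(num_samples=32_000)
print(steps, effective)  # 100 optimizer steps, effective batch of 320
```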
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.0501 | 0.2058 | 50 | 7.2516 |
| 6.6334 | 0.4117 | 100 | 6.1191 |
| 5.8403 | 0.6175 | 150 | 5.5171 |
| 5.347 | 0.8234 | 200 | 5.0809 |
| 4.9621 | 1.0292 | 250 | 4.7655 |
| 4.5909 | 1.2351 | 300 | 4.4418 |
| 4.3142 | 1.4409 | 350 | 4.1684 |
| 4.0577 | 1.6468 | 400 | 3.8857 |
| 3.7934 | 1.8526 | 450 | 3.6317 |
| 3.5603 | 2.0585 | 500 | 3.4786 |
| 3.3743 | 2.2643 | 550 | 3.3722 |
| 3.3003 | 2.4702 | 600 | 3.2932 |
| 3.2338 | 2.6760 | 650 | 3.2353 |
| 3.1788 | 2.8818 | 700 | 3.1763 |
| 3.0774 | 3.0877 | 750 | 3.1289 |
| 2.9735 | 3.2935 | 800 | 3.0953 |
| 2.9351 | 3.4994 | 850 | 3.0626 |
| 2.9367 | 3.7052 | 900 | 3.0310 |
| 2.9088 | 3.9111 | 950 | 3.0032 |
| 2.7944 | 4.1169 | 1000 | 2.9830 |
| 2.7402 | 4.3228 | 1050 | 2.9669 |
| 2.7293 | 4.5286 | 1100 | 2.9475 |
| 2.7184 | 4.7345 | 1150 | 2.9275 |
| 2.7029 | 4.9403 | 1200 | 2.9098 |
| 2.6065 | 5.1462 | 1250 | 2.9024 |
| 2.5699 | 5.3520 | 1300 | 2.8938 |
| 2.5511 | 5.5578 | 1350 | 2.8836 |
| 2.5503 | 5.7637 | 1400 | 2.8756 |
| 2.5435 | 5.9695 | 1450 | 2.8711 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
saltdurian/solartest1 | saltdurian | "2024-06-20T10:48:47Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-20T10:48:47Z" | ---
license: mit
---
|
mdiamore/intern_math_redo2 | mdiamore | "2024-06-21T00:09:22Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-20T10:50:16Z" | Entry not found |
CezarFY/whisper-tiny-cfy | CezarFY | "2024-06-20T17:35:51Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-20T10:52:14Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-cfy
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[451:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3184257602862254
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-cfy
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6293
- Wer Ortho: 0.3192
- Wer: 0.3184
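The WER reported above is the word-level edit distance between the model transcript and the reference, normalized by the number of reference words. A minimal sketch of the metric (a simplified stand-in, not the library implementation used during training):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words, via standard dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

print(word_error_rate("turn on the lights", "turn off the light"))  # 0.5
```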
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
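The `constant_with_warmup` scheduler listed above ramps the learning rate linearly from 0 to 1e-05 over the first 50 steps, then holds it flat for the remaining steps. A minimal sketch of that rule (not the `transformers` implementation itself):

```python
def constant_with_warmup(step, base_lr=1e-05, warmup_steps=50):
    """Learning rate under a constant-with-warmup schedule: linear
    ramp from 0 to base_lr over warmup_steps, then constant."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(constant_with_warmup(25))   # halfway through warmup: 5e-06
print(constant_with_warmup(400))  # after warmup: 1e-05
```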
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0011 | 17.2414 | 500 | 0.6293 | 0.3192 | 0.3184 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
heesun1/difu_model_test | heesun1 | "2024-06-20T10:59:57Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] | null | "2024-06-20T10:52:38Z" | Entry not found |
jjdp8rjj/jjdp8rjj | jjdp8rjj | "2024-06-20T10:56:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:56:32Z" | Entry not found |
ShapeKapseln33/SummerKeto22 | ShapeKapseln33 | "2024-06-20T11:01:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T10:57:15Z" | Summer Keto + ACV Gummies France -Uni : Lβajustement et la perte de poids sont devenus populaires rΓ©cemment. Les gens rΓ©flΓ©chissent souvent beaucoup Γ leur apparence et Γ leur comportement. Les mΓ©dias sociaux sont la principale raison pour laquelle tant dβindividus rΓ©ussissent dans le monde moderne, car ils sous-tendent tout. Actuellement, plus de 25 % des personnes s'intΓ©ressent au vlogging, et tout le monde souhaite se tenir au courant des derniΓ¨res modes, visiter des endroits exotiques et apprendre de nouvelles choses en gΓ©nΓ©ral.
**[Cliquez ici pour acheter maintenant sur le site officiel de Summer Keto ACV Gummies](https://adtocart.xyz/summer-keto-fr)**
Aujourdβhui, vous trouverez de nombreux produits de perte de poids sur les marchΓ©s en ligne qui promettent faussement de vous donner une silhouette mince en quelques semaines. Les clients qui sont piΓ©gΓ©s dans ces produits contrefaits peuvent obtenir des rΓ©sultats nΓ©gatifs sur leur corps. Ils ne sont pas sans danger pour une utilisation Γ long terme en raison de la prΓ©sence d'ingrΓ©dients et d'arΓ΄mes artificiels.
Maintenant, vous pouvez essayer Summer Keto ACV Gummies au Royaume-Uni pour rΓ©duire le poids supplΓ©mentaire du corps. Ces bonbons gΓ©lifiΓ©s peuvent aider Γ perdre les kilos en trop du corps en 4 Γ 5 semaines et Γ affiner votre silhouette. Ce blog dΓ©crit en dΓ©tail Summer Keto ACV Gummies 250 mg avec ses ingrΓ©dients, sa composition, sa formule, son fonctionnement, ses Γ©tudes mΓ©dicales, ses avantages et sa posologie.
##Summer Keto ACV Gummies en bref
Summer Keto + ACV Gummies 250 mg sont des gummies naturels de perte de poids composΓ©s d'ingrΓ©dients biologiques et d'Γ©lΓ©ments naturels. Ils peuvent aider Γ Γ©liminer les graisses tenaces du corps et Γ donner une silhouette Γ©lancΓ©e en quelques semaines. En dehors de cela, le produit peut Γ©galement brΓ»ler des graisses pour produire de lβΓ©nergie au lieu de glucides. Ce produit naturel de perte de poids peut amΓ©liorer la santΓ© mentale et amΓ©liorer la concentration mentale au cours de certaines semaines.
##IngrΓ©dients clΓ©s des bonbons gΓ©lifiΓ©s Summer Keto ACV
Les principaux ingrΓ©dients des Summer Keto ACV Gummies au Royaume-Uni comprennent des cΓ©tones avancΓ©es et des extraits de pommes. Ces bonbons gΓ©lifiΓ©s peuvent Γ©galement contenir des extraits dβherbes, de plantes et de fruits. En outre, le produit peut Γ©galement contenir dβautres nutriments et minΓ©raux importants en quantitΓ©s modΓ©rΓ©es. Chaque ingrΓ©dient utilisΓ© dans la fabrication de ce produit est testΓ© en laboratoire par des experts mΓ©dicaux et des scientifiques.
Ces bonbons gΓ©lifiΓ©s ne peuvent pas contenir d'arΓ΄mes, de colorants, de conservateurs artificiels, de parabΓ¨nes, de stimulants ou de charges. Ils sont exempts de produits chimiques, synthΓ©tiques ou gluten. Prendre ces bonbons quotidiennement est sans danger pour le corps et ne peut provoquer aucun effet secondaire tel que des maux de tΓͺte ou des migraines.
**[Cliquez ici pour acheter maintenant sur le site officiel de Summer Keto ACV Gummies](https://adtocart.xyz/summer-keto-fr)**
##Comment sont fabriquΓ©s les gummies ?
Chaque gummie est fabriquΓ© sous la supervision des meilleures Γ©quipes mΓ©dicales et de bons scientifiques. Ce produit est fabriquΓ© Γ lβaide des derniΓ¨res techniques et mΓ©thodes. Il est prΓ©parΓ© selon les meilleures pratiques et normes de lβindustrie.
Les Summer Keto + ACV Gummies sont recommandΓ©s par les meilleurs mΓ©decins et diΓ©tΓ©tistes du monde entier. Cβest un produit cliniquement prouvΓ©, efficace pour rΓ©duire lβobΓ©sitΓ© et lβexcΓ¨s de poids dans le corps.
##Que dit lβΓ©tude mΓ©dicale sur les Summer Keto ACV Gummies ?
La derniΓ¨re enquΓͺte sur la santΓ© montre qu'environ 26 % des adultes souffrent d'obΓ©sitΓ© et de prise de poids. Il indique Γ©galement quβenviron 38 % des adultes sont en surpoids. Ces chiffres augmentent chaque annΓ©e et l'obΓ©sitΓ© devient un grave problΓ¨me de santΓ© dans le monde. Une large population choisit les mΓ©thodes et produits traditionnels de perte de poids.
Selon la nouvelle enquΓͺte, les produits amaigrissants traditionnels sont nocifs pour le corps. Ils provoquent des effets secondaires dans lβorganisme tels que des migraines et des maux de tΓͺte.
Un nouveau produit Summer Keto ACV Gummies est composΓ© dβingrΓ©dients naturels. Ces bonbons gΓ©lifiΓ©s sont utilisΓ©s par de nombreux clients pour rΓ©duire l'obΓ©sitΓ©. La plupart des clients qui utilisent ce produit quotidiennement obtiennent une belle silhouette en quelques semaines. On dit que les bonbons aident Γ faire fondre les graisses tenaces dans le corps en quelques semaines.
De nombreuses personnes obtiennent des niveaux dβΓ©nergie plus Γ©levΓ©s aprΓ¨s avoir pris une dose quotidienne de ces bonbons gΓ©lifiΓ©s. Ce produit naturel aide Γ brΓ»ler les graisses au lieu des glucides et donne Γ votre silhouette une apparence attrayante. De plus, le produit a reΓ§u un maximum de critiques positives de la part des clients.
##Quel est le coΓ»t de vente des Summer Keto ACV Gummies au CanadaΒ ?
Le prix de 2 bouteilles de bonbons gΓ©lifiΓ©s Summer Keto ACV est de 49,95 Β£ par bouteille. Pour bΓ©nΓ©ficier de rΓ©ductions supplΓ©mentaires, vous pouvez essayer le pack de 4 bouteilles pour seulement 39,95 Β£ par bouteille. Le pack super Γ©conomique de bonbons gΓ©lifiΓ©s Summer Keto + ACV est disponible au prix de 39,95 Β£ par bouteille. Chaque commande que vous passez sur le site officiel de Summer Keto ACV Gummies est Γ©ligible Γ la livraison gratuite en .
##Comment sβopΓ¨re la perte de poids dans le corps ?
Summer Keto + ACV Gummies peut aider Γ brΓ»ler les graisses tenaces dans le ventre, les cuisses, le menton et le cou. Ils peuvent aider Γ perdre du poids jusquβΓ 5 livres. en seulement 2 semaines. En dehors de cela, les bonbons peuvent Γ©galement accΓ©lΓ©rer le processus mΓ©tabolique et brΓ»ler les graisses de tout le corps. Vous pouvez perdre jusqu'Γ 20 livres. de poids en 3 Γ 4 semaines.
Les bonbons gΓ©lifiΓ©s peuvent stabiliser lβappΓ©tit et rΓ©duire la faim frΓ©quente et les fringales de minuit. Ils peuvent donner une sensation de satiΓ©tΓ© Γ votre ventre et ne pas vous donner faim pendant des heures. De plus, les bonbons gΓ©lifiΓ©s peuvent provoquer une transformation soudaine de votre corps en quelques semaines.
**[Cliquez ici pour acheter maintenant sur le site officiel de Summer Keto ACV Gummies](https://adtocart.xyz/summer-keto-fr)**
|
Jozaita/code-search-net-tokenizer | Jozaita | "2024-06-20T10:57:30Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T10:57:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Eka-Korn/my_awesome_billsum_model | Eka-Korn | "2024-06-20T11:07:56Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-20T10:57:44Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4965
- Rouge1: 0.1462
- Rouge2: 0.0533
- Rougel: 0.121
- Rougelsum: 0.121
- Gen Len: 19.0
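The Rouge1 score above measures unigram overlap between the generated and reference summaries. A minimal sketch of the F1 variant of the metric (the run itself likely used a library implementation; this simplified version omits stemming and tokenization details):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    """ROUGE-1 F1: unigram overlap between candidate and reference,
    combined as the harmonic mean of precision and recall."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the bill passed the senate", "the bill passed"))  # 0.75
```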
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7872 | 0.1335 | 0.0418 | 0.1138 | 0.1137 | 19.0 |
| No log | 2.0 | 124 | 2.5750 | 0.143 | 0.0536 | 0.1199 | 0.1201 | 19.0 |
| No log | 3.0 | 186 | 2.5135 | 0.1437 | 0.0501 | 0.1186 | 0.1186 | 19.0 |
| No log | 4.0 | 248 | 2.4965 | 0.1462 | 0.0533 | 0.121 | 0.121 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ASHreyash/MediGenie-llama3-8b-instruct | ASHreyash | "2024-06-20T11:10:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T10:57:48Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DBangshu/Base_gemma_e5_3_1 | DBangshu | "2024-06-20T11:01:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-20T10:58:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sataayu/molt5-augmented-default-1-large-smiles2caption | sataayu | "2024-06-20T14:35:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-20T10:59:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jamesohe/casJoinUs-Llama3-8B-Epo100TextG1-R1e5-adapter | jamesohe | "2024-06-20T10:59:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T10:59:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
varun-v-rao/bart-large-bn-adapter-3.17M-squad-model2 | varun-v-rao | "2024-06-20T13:55:34Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:facebook/bart-large",
"license:apache-2.0",
"region:us"
] | null | "2024-06-20T11:03:55Z" | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: bart-large-bn-adapter-3.17M-squad-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-bn-adapter-3.17M-squad-model2
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
joecheriross/whisper-small-hi | joecheriross | "2024-06-20T11:05:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-20T11:04:18Z" | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper-tiny joe v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-tiny joe v1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
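A minimal inference sketch (not part of the original card): loading this checkpoint through the `transformers` ASR pipeline. The repo id is taken from this card; the audio file name is a placeholder.

```python
# Hedged usage sketch for the fine-tuned Whisper checkpoint described above.
# The heavy import is deferred so the constants can be inspected cheaply;
# building the pipeline downloads the model weights (network required).
MODEL_ID = "joecheriross/whisper-small-hi"

def build_transcriber():
    from transformers import pipeline  # deferred: pulls in torch and the checkpoint
    return pipeline("automatic-speech-recognition", model=MODEL_ID)

# Example (uncomment to run):
# asr = build_transcriber()
# print(asr("sample_hindi.wav")["text"])
```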
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
gitfreder/brain-tumor-classifier | gitfreder | "2024-06-20T11:23:21Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"image-classification",
"en",
"license:mit",
"region:us"
] | image-classification | "2024-06-20T11:06:33Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
A CNN model for classifying brain tumor MRI scans.
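A hedged loading sketch for a model pushed with `PyTorchModelHubMixin`. The class name `BrainTumorCNN`, its layers, and the number of classes are assumptions for illustration; the real architecture must match the one used when the weights were pushed.

```python
# Sketch: a PyTorch module that mixes in PyTorchModelHubMixin so its weights
# can be pulled from the Hub with from_pretrained(). Architecture is assumed.
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class BrainTumorCNN(nn.Module, PyTorchModelHubMixin):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Two small conv blocks, each halving spatial resolution.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For 224x224 inputs: 224 -> 112 -> 56 after two poolings.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

def load_pretrained(repo_id: str = "gitfreder/brain-tumor-classifier"):
    # Downloads config + weights from the Hub (network required); only works
    # if this class definition matches the pushed checkpoint.
    return BrainTumorCNN.from_pretrained(repo_id)
```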
- Library: [More Information Needed]
- Docs: [More Information Needed]
- Dataset: [Brain Tumor MRI](https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset)
- Test loss: 0.0212
- Test accuracy: 0.9657 |
jamesohe/casJoinUs-Gemma-V2-7B-ep100-adapter | jamesohe | "2024-06-20T11:08:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:08:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jhoppanne/Dogs-Breed-Image-Classification-V0 | jhoppanne | "2024-06-21T14:36:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-50",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-20T11:11:16Z" | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Dogs-Breed-Image-Classification-V0
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7444120505344995
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dogs-Breed-Image-Classification-V0
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8210
- Accuracy: 0.7444
## Model description
This model was trained using the dataset from [Kaggle - Stanford Dogs dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset).
Quoted from the dataset page:
The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grain image categorization, a challenging problem as certain dog breeds have near identical features or differ in colour and age.
Citation:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary:
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
## Intended uses & limitations
This model is fine-tuned solely for classifying 120 dog breeds.
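A hedged inference sketch (not part of the original card) using the `transformers` image-classification pipeline. The repo id matches this card; the image path is a placeholder, and the label set comes from the 120-breed Stanford Dogs dataset described above.

```python
# Usage sketch for the fine-tuned ResNet-50 breed classifier.
MODEL_ID = "jhoppanne/Dogs-Breed-Image-Classification-V0"

def classify(image_path: str, top_k: int = 3):
    from transformers import pipeline  # deferred: downloads weights on first use
    clf = pipeline("image-classification", model=MODEL_ID)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return clf(image_path, top_k=top_k)

# Example (uncomment to run):
# print(classify("my_dog.jpg"))
```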
## Training and evaluation data
75% training data, 25% testing data.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 13.4902 | 1.0 | 515 | 4.7822 | 0.0104 |
| 4.7159 | 2.0 | 1030 | 4.6822 | 0.0323 |
| 4.6143 | 3.0 | 1545 | 4.5940 | 0.0554 |
| 4.4855 | 4.0 | 2060 | 4.5027 | 0.0935 |
| 4.36 | 5.0 | 2575 | 4.3961 | 0.1239 |
| 4.2198 | 6.0 | 3090 | 4.3112 | 0.1528 |
| 4.0882 | 7.0 | 3605 | 4.1669 | 0.1747 |
| 3.9314 | 8.0 | 4120 | 4.0775 | 0.2021 |
| 3.7863 | 9.0 | 4635 | 3.9487 | 0.2310 |
| 3.6511 | 10.0 | 5150 | 3.9028 | 0.2466 |
| 3.5168 | 11.0 | 5665 | 3.8635 | 0.2626 |
| 3.3999 | 12.0 | 6180 | 3.7550 | 0.2767 |
| 3.3037 | 13.0 | 6695 | 3.6973 | 0.2884 |
| 3.1613 | 14.0 | 7210 | 3.6315 | 0.3037 |
| 3.0754 | 15.0 | 7725 | 3.4839 | 0.3188 |
| 2.9441 | 16.0 | 8240 | 3.4406 | 0.3302 |
| 2.8579 | 17.0 | 8755 | 3.3528 | 0.3406 |
| 2.7531 | 18.0 | 9270 | 3.3132 | 0.3472 |
| 2.6477 | 19.0 | 9785 | 3.2736 | 0.3567 |
| 2.5422 | 20.0 | 10300 | 3.1950 | 0.3756 |
| 2.4629 | 21.0 | 10815 | 3.1174 | 0.4004 |
| 2.3735 | 22.0 | 11330 | 2.9916 | 0.4225 |
| 2.2436 | 23.0 | 11845 | 2.9205 | 0.4509 |
| 2.1578 | 24.0 | 12360 | 2.9197 | 0.4689 |
| 2.0671 | 25.0 | 12875 | 2.8196 | 0.4866 |
| 1.9902 | 26.0 | 13390 | 2.7117 | 0.4961 |
| 1.8737 | 27.0 | 13905 | 2.7129 | 0.5078 |
| 1.7945 | 28.0 | 14420 | 2.6654 | 0.5143 |
| 1.7092 | 29.0 | 14935 | 2.6273 | 0.5301 |
| 1.6228 | 30.0 | 15450 | 2.5407 | 0.5454 |
| 1.5744 | 31.0 | 15965 | 2.5412 | 0.5559 |
| 1.4761 | 32.0 | 16480 | 2.4658 | 0.5658 |
| 1.4084 | 33.0 | 16995 | 2.4247 | 0.5673 |
| 1.2624 | 34.0 | 17510 | 2.3766 | 0.5758 |
| 1.2066 | 35.0 | 18025 | 2.2879 | 0.5843 |
| 1.124 | 36.0 | 18540 | 2.2039 | 0.5872 |
| 1.074 | 37.0 | 19055 | 2.2469 | 0.5965 |
| 0.9937 | 38.0 | 19570 | 2.1575 | 0.6011 |
| 0.9418 | 39.0 | 20085 | 2.0854 | 0.6122 |
| 0.8812 | 40.0 | 20600 | 1.9991 | 0.6254 |
| 0.819 | 41.0 | 21115 | 2.0161 | 0.6312 |
| 0.771 | 42.0 | 21630 | 1.9253 | 0.6375 |
| 0.7128 | 43.0 | 22145 | 1.9412 | 0.6390 |
| 0.6434 | 44.0 | 22660 | 1.8463 | 0.6509 |
| 0.6138 | 45.0 | 23175 | 1.8163 | 0.6650 |
| 0.5325 | 46.0 | 23690 | 1.7881 | 0.6710 |
| 0.498 | 47.0 | 24205 | 1.7526 | 0.6744 |
| 0.4565 | 48.0 | 24720 | 1.7155 | 0.6859 |
| 0.4109 | 49.0 | 25235 | 1.6874 | 0.6946 |
| 0.3681 | 50.0 | 25750 | 1.7386 | 0.6997 |
| 0.3306 | 51.0 | 26265 | 1.6578 | 0.7104 |
| 0.2913 | 52.0 | 26780 | 1.6641 | 0.7104 |
| 0.2598 | 53.0 | 27295 | 1.6823 | 0.7162 |
| 0.2311 | 54.0 | 27810 | 1.6835 | 0.7157 |
| 0.2115 | 55.0 | 28325 | 1.6581 | 0.7206 |
| 0.1843 | 56.0 | 28840 | 1.6286 | 0.7274 |
| 0.1668 | 57.0 | 29355 | 1.6358 | 0.7225 |
| 0.1483 | 58.0 | 29870 | 1.6422 | 0.7250 |
| 0.132 | 59.0 | 30385 | 1.6618 | 0.7284 |
| 0.1164 | 60.0 | 30900 | 1.6894 | 0.7262 |
| 0.1043 | 61.0 | 31415 | 1.6923 | 0.7276 |
| 0.0937 | 62.0 | 31930 | 1.6627 | 0.7323 |
| 0.0826 | 63.0 | 32445 | 1.6280 | 0.7342 |
| 0.0743 | 64.0 | 32960 | 1.6204 | 0.7366 |
| 0.0638 | 65.0 | 33475 | 1.6890 | 0.7383 |
| 0.0603 | 66.0 | 33990 | 1.6967 | 0.7335 |
| 0.0491 | 67.0 | 34505 | 1.6975 | 0.7306 |
| 0.0459 | 68.0 | 35020 | 1.7242 | 0.7337 |
| 0.0416 | 69.0 | 35535 | 1.7019 | 0.7374 |
| 0.0382 | 70.0 | 36050 | 1.7098 | 0.7381 |
| 0.0378 | 71.0 | 36565 | 1.7188 | 0.7383 |
| 0.0326 | 72.0 | 37080 | 1.8212 | 0.7376 |
| 0.0323 | 73.0 | 37595 | 1.7965 | 0.7393 |
| 0.0299 | 74.0 | 38110 | 1.7934 | 0.7301 |
| 0.0259 | 75.0 | 38625 | 1.7799 | 0.7335 |
| 0.0276 | 76.0 | 39140 | 1.8456 | 0.7301 |
| 0.0257 | 77.0 | 39655 | 1.8551 | 0.7391 |
| 0.0234 | 78.0 | 40170 | 1.7780 | 0.7391 |
| 0.0222 | 79.0 | 40685 | 1.8216 | 0.7362 |
| 0.0195 | 80.0 | 41200 | 1.8333 | 0.7352 |
| 0.0214 | 81.0 | 41715 | 1.8526 | 0.7430 |
| 0.0207 | 82.0 | 42230 | 1.8581 | 0.7364 |
| 0.0171 | 83.0 | 42745 | 1.8329 | 0.7393 |
| 0.0175 | 84.0 | 43260 | 1.8841 | 0.7396 |
| 0.0165 | 85.0 | 43775 | 1.8381 | 0.7345 |
| 0.0152 | 86.0 | 44290 | 1.8192 | 0.7379 |
| 0.0168 | 87.0 | 44805 | 1.8538 | 0.7388 |
| 0.0158 | 88.0 | 45320 | 1.8390 | 0.7371 |
| 0.0181 | 89.0 | 45835 | 1.8555 | 0.7374 |
| 0.0142 | 90.0 | 46350 | 1.7987 | 0.7352 |
| 0.0147 | 91.0 | 46865 | 1.8446 | 0.7427 |
| 0.0142 | 92.0 | 47380 | 1.8210 | 0.7444 |
| 0.0124 | 93.0 | 47895 | 1.8233 | 0.7405 |
| 0.0128 | 94.0 | 48410 | 1.8517 | 0.7393 |
| 0.0135 | 95.0 | 48925 | 1.8408 | 0.7413 |
| 0.0122 | 96.0 | 49440 | 1.8153 | 0.7396 |
| 0.0141 | 97.0 | 49955 | 1.8645 | 0.7432 |
| 0.0121 | 98.0 | 50470 | 1.8526 | 0.7430 |
| 0.0124 | 99.0 | 50985 | 1.8693 | 0.7388 |
| 0.0113 | 100.0 | 51500 | 1.8051 | 0.7427 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.15.0
- Tokenizers 0.15.1
|
MohamedAhmedAE/Tiny-Gemma-Medical-Train | MohamedAhmedAE | "2024-06-20T11:38:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-20T11:12:44Z" | Entry not found |
DBangshu/gemma_e5_5_4 | DBangshu | "2024-06-20T11:15:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-20T11:12:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RanchiZhao/MiniCPM-2B-sft-fp32 | RanchiZhao | "2024-06-20T11:14:33Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-20T11:14:33Z" | ---
license: mit
---
|
balaramas/asr_bangla | balaramas | "2024-06-20T11:23:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-20T11:15:14Z" | ---
license: apache-2.0
---
|
Ramikan-BR/TiamaPY-LORA-v31 | Ramikan-BR | "2024-06-20T11:16:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:15:48Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
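A hedged sketch (assumptions noted in comments) for attaching this LoRA adapter to its 4-bit base model with PEFT. The repo ids come from this card; everything else is illustrative, not the author's documented workflow.

```python
# Sketch: load the base checkpoint, then layer the LoRA adapter on top.
BASE_ID = "unsloth/tinyllama-chat-bnb-4bit"      # base model named in this card
ADAPTER_ID = "Ramikan-BR/TiamaPY-LORA-v31"       # this adapter repo

def load_model():
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred
    from peft import PeftModel
    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # injects LoRA weights
    tok = AutoTokenizer.from_pretrained(BASE_ID)
    return model, tok
```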
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GN20403/distilbert | GN20403 | "2024-06-20T11:17:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T11:17:39Z" | Entry not found |
Chahat7874/wav2vec2-large-xls-r-300m-hindi_telugu-colab | Chahat7874 | "2024-06-20T13:51:23Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:19:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bezzam/diffusercam-mirflickr-unet4M-unrolled-admm5-unet4M | bezzam | "2024-06-20T11:21:07Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-20T11:20:38Z" | ---
license: mit
---
|
karthikmit/openai-whisper-medium-LORA | karthikmit | "2024-06-20T15:30:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:21:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abdullah1010/merged_model_V2 | abdullah1010 | "2024-06-20T11:29:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-20T11:22:06Z" | Entry not found |
bezzam/diffusercam-mirflickr-mwdn-8M | bezzam | "2024-06-20T11:23:11Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-20T11:22:46Z" | ---
license: mit
---
|
hcy5561/distilbert-base-uncased-finetuned-ner | hcy5561 | "2024-06-20T11:27:22Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-20T11:23:13Z" | Entry not found |
majoh837/openchat_viz | majoh837 | "2024-06-20T11:23:29Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:openchat/openchat-3.5-0106",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:23:28Z" | ---
base_model: openchat/openchat-3.5-0106
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** majoh837
- **License:** apache-2.0
- **Finetuned from model :** openchat/openchat-3.5-0106
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mogmyij/Llama2-7b-BoolQ-full-LoRA | mogmyij | "2024-06-20T11:24:54Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-06-20T11:24:46Z" | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
datasets:
- generator
model-index:
- name: Llama2-7b-BoolQ-full-LoRA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2-7b-BoolQ-full-LoRA
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
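As a rough illustrative sketch (not the exact `transformers` scheduler implementation), the `cosine` schedule with the 5% warmup ratio configured above behaves like this: the learning rate ramps linearly from 0 to 1e-4 during the first 5% of steps, then decays along a half cosine back to 0.

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-4, warmup_ratio=0.05):
    """Cosine learning-rate schedule with linear warmup (per the hyperparameters above)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)  # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))  # cosine decay

# 69 optimizer steps total, matching the results table below
print(lr_at_step(0, 69))   # 0.0 (start of warmup)
print(lr_at_step(3, 69))   # peak base_lr once warmup ends
print(lr_at_step(69, 69))  # decays back toward 0.0
```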
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4039 | 0.9787 | 23 | 1.3046 |
| 1.2571 | 2.0 | 47 | 1.1492 |
| 1.121 | 2.9362 | 69 | 1.1372 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Benjoe19/Test | Benjoe19 | "2024-06-20T11:29:56Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-20T11:29:56Z" | ---
license: mit
---
|
qazimbhat1/crystal-chat-general | qazimbhat1 | "2024-06-20T11:33:19Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llava_crystal",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T11:32:30Z" | Entry not found |
Goshgosh/Anyline | Goshgosh | "2024-06-20T11:51:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T11:33:32Z" | Entry not found |
valerielucro/mistral_gsm8k_sft_cot | valerielucro | "2024-06-20T11:35:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:35:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gsar78/tokenizer_BPE_en_el | gsar78 | "2024-06-20T11:36:51Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T11:36:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NotAiLOL/test_llama_3_8b_step_60 | NotAiLOL | "2024-06-20T11:48:37Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"text-generation-inference",
"transformers",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | "2024-06-20T11:36:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
library_name: peft
---
# Uploaded model
- **Developed by:** NotAiLOL
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Boostaro155/PharmaflexRX455 | Boostaro155 | "2024-06-20T11:37:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T11:36:56Z" | # Pharmaflex RX Reviews and Experiences - Pharma Flex RX Efficacy, Ingredients, Official Price, and Where to Buy
Pharmaflex RX reviews and experiences - Pharma Flex Rx, a highly popular nutritional supplement, is designed to maintain and improve joint health. The manufacturer says it is made from all-natural ingredients with no fillers.
## **[Click here to buy now from the official Pharmaflex RX website.](https://adtocart.xyz/pharmaflex-rx)**
## How does Pharma Flex RX work?
As already mentioned, Pharma Flex RX is presented as a top product for the long-term treatment of chronic joint pain. The solution promotes the natural regeneration of cartilage, which helps reduce joint swelling and irritation. Because bone density begins to decline with age, Pharma Flex RX helps restore strength and resilience for better health and well-being.
The formula, reportedly backed by experience, works most effectively at the cellular level, making it safe and useful with long-lasting results. PharmaFlex cushions the cartilage and optimizes joint regeneration while promoting better overall function and mobility. Pharma Flex RX increases the body's natural blood flow, supplying the joints with nutrients, delivering vitamins, and improving overall well-being.
## What ingredients are in Pharma Flex RX?
Pharma Flex RX is made exclusively from organically grown natural ingredients to provide a safe and effective solution. This is one of the factors that makes the product both highly effective and safe. The ingredients are as follows.
One of the most potent ingredients in the formula is turmeric root extract. This organic substance relieves chronic joint pain caused by inflammation. Turmeric root works by preventing the build-up of harmful cytokine proteins that damage the joints and trigger inflammation. As a result, it reduces the swelling and inflammation of arthritis and delays its onset.
Methylsulfonylmethane (MSM), a natural organic sulfur compound found in many edible plants, is generally known to reduce chronic pain and discomfort throughout the body. MSM also helps increase the body's flexibility.
Another active ingredient that provides relief from joint pain is glucosamine sulfate. It serves to cushion and regenerate cartilage and to improve joint function. Glucosamine sulfate also helps prevent degenerative joint failure.
According to the Mayo Clinic website, taking this key ingredient as a supplement can slow cartilage degeneration and help reduce joint discomfort.
Another of PharmaFlex's main active ingredients has a range of properties that help reduce inflammation and chronic joint pain, and it strengthens the joints' defenses against serious damage.
The last ingredient in the formula is a natural compound called bromelain. Its potent combination of analgesic and anti-inflammatory properties relieves acute and chronic pain over the long term.
## Pharma Flex RX benefits
Pharma Flex RX is remarkably effective at treating joint pain and swelling because its key ingredients soothe the whole body and keep sudden pain at bay for a long time. Its benefits are summarized below; please read them carefully.
Helps relieve severe joint pain.
### Speeds up muscle healing
Reduces mental strain such as anxiety, tension, and feelings of hopelessness.
## **[Click here to buy now from the official Pharmaflex RX website.](https://adtocart.xyz/pharmaflex-rx)** |
jihyunnn/repo_name | jihyunnn | "2024-06-20T11:37:00Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-20T11:37:00Z" | ---
license: unknown
---
|
codingninja/w2v2-punjabi-asr-v2 | codingninja | "2024-06-20T11:37:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T11:37:07Z" | Entry not found |
rebaiAhmad/q-FrozenLake-v1-4x4-noSlippery | rebaiAhmad | "2024-06-20T11:37:10Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-20T11:37:08Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # needed for `gym.make` below

# `load_from_hub` is the helper from the Hugging Face Deep RL course (it unpickles the saved Q-table)
model = load_from_hub(repo_id="rebaiAhmad/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
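Once the Q-table is loaded, acting with it is just a greedy argmax over the current state's row. A minimal sketch, where `qtable` is a toy stand-in for the real table stored in `model`:

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for `state` (greedy policy)."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy 2-state, 3-action Q-table standing in for the downloaded one
qtable = [[0.1, 0.5, 0.2],
          [0.9, 0.0, 0.3]]
print(greedy_action(qtable, 0))  # 1
print(greedy_action(qtable, 1))  # 0
```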
|
varun-v-rao/gpt2-large-lora-2.95M-squad-model2 | varun-v-rao | "2024-06-20T15:17:49Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:openai-community/gpt2-large",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | question-answering | "2024-06-20T11:37:58Z" | ---
license: mit
base_model: openai-community/gpt2-large
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: gpt2-large-lora-2.95M-squad-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-lora-2.95M-squad-model2
This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
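For illustration only (a simplified stand-in for the `transformers` linear scheduler), the `linear` schedule above decays the learning rate from 2e-5 down to 0 over the course of training, with no warmup configured:

```python
def linear_lr(step, total_steps, base_lr=2e-5):
    """Linear decay from base_lr down to 0 (lr_scheduler_type: linear, no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0, 1000))     # 2e-05 at the first step
print(linear_lr(500, 1000))   # 1e-05 halfway through
print(linear_lr(1000, 1000))  # 0.0 at the end
```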
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
YahiaABbas/ddpm-butterflies-128 | YahiaABbas | "2024-06-20T11:38:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T11:38:21Z" | Entry not found |
xcf-t/q-FrozenLake-v1-4x4-noSlippery | xcf-t | "2024-06-20T11:39:18Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-20T11:39:16Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # needed for `gym.make` below

# `load_from_hub` is the helper from the Hugging Face Deep RL course (it unpickles the saved Q-table)
model = load_from_hub(repo_id="xcf-t/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
krvstxne/krvstxne | krvstxne | "2024-06-20T11:39:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-20T11:39:43Z" | Entry not found |