modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
fxmeng/PiSSA-Qwen2-72B-4bit-r128-5iter | fxmeng | "2024-06-14T10:55:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-14T07:01:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rxplore/llava_model | Rxplore | "2024-06-14T07:02:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:02:23Z" | Entry not found |
fxmeng/PiSSA-Qwen2-72B-Instruct-4bit-r64-5iter | fxmeng | "2024-06-14T14:46:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-14T07:04:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sunilghanchi/llama-3-8b-Instruct-bnb-4bit-linearloop | sunilghanchi | "2024-06-14T07:13:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:04:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** sunilghanchi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sophiayk/MDEBERTA_2e-06_16_0.1_0.01_10ep | sophiayk | "2024-06-14T07:09:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-14T07:08:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SuMoss/dreamtobenlpsam_dpo_ckpt_5 | SuMoss | "2024-06-14T07:10:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | "2024-06-14T07:09:22Z" | ---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0 |
Tippawan/seallm-oz | Tippawan | "2024-06-14T07:11:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:11:18Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** Tippawan
- **License:** apache-2.0
- **Finetuned from model:** SeaLLMs/SeaLLM-7B-v2.5
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/clorinde_pony | LarryAIDraw | "2024-06-14T07:14:25Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-14T07:12:08Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/196862/genshinxl-clorinde-genshin-impact |
Tippawan/seallm-oz-float16-for-VLLM | Tippawan | "2024-06-14T07:12:18Z" | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:12:17Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dwb2023/llama38binstruct_summarize_v4 | dwb2023 | "2024-06-14T07:16:37Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-06-14T07:14:11Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: llama38binstruct_summarize_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama38binstruct_summarize_v4
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5211 | 1.25 | 25 | 1.4041 |
| 0.4473 | 2.5 | 50 | 1.3731 |
| 0.2109 | 3.75 | 75 | 1.3571 |
| 0.102 | 5.0 | 100 | 1.4313 |
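Reading the table above: 25 optimizer steps correspond to 1.25 epochs, i.e. about 20 steps per epoch; with `train_batch_size: 1` that implies roughly 20 training samples per epoch, and `training_steps: 100` therefore spans the 5 epochs shown. A quick illustrative check (arithmetic only, not part of the original card):

```python
# Step/epoch pairs copied from the training-results table above
steps = [25, 50, 75, 100]
epochs = [1.25, 2.5, 3.75, 5.0]

steps_per_epoch = steps[0] / epochs[0]
print(steps_per_epoch)  # 20.0

# Every row is consistent with ~20 optimizer steps per epoch
for s, e in zip(steps, epochs):
    assert s / steps_per_epoch == e
```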
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
YeBhoneLin10/chatbot | YeBhoneLin10 | "2024-06-14T07:17:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:17:05Z" | Entry not found |
QuantaScriptor/Nexus_AI | QuantaScriptor | "2024-06-14T11:03:28Z" | 0 | 0 | null | [
"Federated Learning",
"Homomorphic Encryption",
"Secure Data Processing",
"en",
"license:other",
"region:us"
] | null | "2024-06-14T07:17:54Z" | ---
license: other
license_name: commercial-license-agi-gpt-5-model
license_link: LICENSE
language:
- en
tags:
- Federated Learning
- Homomorphic Encryption
- Secure Data Processing
---
# 🎉 **Nexus AI** 🎉
Welcome to the **Nexus AI** repository! This project implements the Nexus AI model with modular neural networks, federated learning, and homomorphic encryption for secure data processing.
---
## 🌟 **Overview** 🌟
This repository contains:
- **Modular Neural Networks**: Dynamic architecture adjustments for enhanced performance.
- **Federated Learning**: Secure, scalable model deployment using federated learning frameworks.
- **Homomorphic Encryption**: Secure data processing with state-of-the-art encryption techniques.
---
## 📂 **File Structure** 📂
```plaintext
project_root/
├── config.py
├── main.py
├── model.py
├── federated.py
├── data.py
├── train.py
├── config.json
├── tokenizer_config.json
├── vocab.json
├── merges.txt
├── requirements.txt
├── setup.py
└── README.md
```
---
## 🛠️ **Setup and Installation** 🛠️
1. **Clone the repository**:
```bash
git clone https://github.com/Quantascriptor/Nexus_AI.git
cd Nexus_AI
```
2. **Install dependencies**:
```bash
pip install -r requirements.txt
```
3. **Run the main script**:
```bash
python main.py
```
---
## 📜 **Licensing** 📜
- **Non-Commercial Use**: Licensed under the Apache License 2.0.
- **Commercial Use**: Refer to the commercial license details provided separately.
For commercial licensing, please contact us:
- 📧 Email: [sales@quantascript.com](mailto:sales@quantascript.com)
- 📞 Phone: 650-440-7704
---
## 📈 **Training the Model** 📈
To fine-tune the Nexus AI model, use the `train.py` script:
```bash
python train.py --dataset path_to_your_dataset.txt
```
---
## 🔧 **Configuration** 🔧
Modify `config.py` for hyperparameter settings:
```python
# config.py
LEARNING_RATE = 0.01
BATCH_SIZE = 10
EPOCHS = 10
ENCRYPTION_KEY = 1.234
```
---
## 🚀 **Federated Learning** 🚀
Implement federated learning using `federated.py`:
```python
# Hypothetical imports: ModularNetwork and the federated helpers are assumed
# to live in model.py / federated.py from the file structure above.
import torch.nn as nn
from federated import FederatedLearningServer, federated_training, prepare_federated_data
from model import ModularNetwork

# Initialize global model
global_model = ModularNetwork([nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)])
clients = prepare_federated_data()
server = FederatedLearningServer(global_model)
federated_training(server, clients)
```
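The loop above relies on helpers defined elsewhere in the repository. As a rough sketch of what a `federated_training` round might do, here is a minimal federated-averaging (FedAvg) illustration in plain Python; the weight representation and "local training" step are simplifications, not the repository's actual implementation:

```python
# Minimal FedAvg sketch: each client updates the global weights locally,
# then the server averages the updates element-wise into a new global model.
def fed_avg(client_weights):
    """Element-wise average of equal-length weight vectors."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n_clients for i in range(n_params)]

def federated_round(global_weights, clients, local_update):
    """One round: every client refines the global weights on its own data."""
    updates = [local_update(global_weights, data) for data in clients]
    return fed_avg(updates)

# Toy run: "local training" just nudges the weights toward the client's data.
clients = [[1.0, 2.0], [3.0, 4.0]]
step = lambda w, d: [wi + 0.1 * (di - wi) for wi, di in zip(w, d)]
new_global = federated_round([0.0, 0.0], clients, step)
print(new_global)
```

The key property is that raw client data never leaves the client; only the locally updated weights are shared with the server.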
---
## 🔒 **Security Features** 🔒
Homomorphic encryption for secure data processing:
```python
# Encrypt data
encrypted_data = homomorphic_encrypt(data, ENCRYPTION_KEY)
```
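The card does not show how `homomorphic_encrypt` is implemented. Judging from the scalar `ENCRYPTION_KEY` in `config.py`, one plausible toy scheme is multiplicative masking, which preserves addition over ciphertexts; the sketch below is purely illustrative and not cryptographically secure:

```python
# Toy "homomorphic" scheme: multiply each value by a secret scalar key.
# NOT secure cryptography -- it only illustrates the additive-homomorphism
# property that real schemes (e.g. Paillier, CKKS) provide.
ENCRYPTION_KEY = 1.234  # value taken from config.py above

def homomorphic_encrypt(values, key=ENCRYPTION_KEY):
    return [v * key for v in values]

def homomorphic_decrypt(values, key=ENCRYPTION_KEY):
    return [v / key for v in values]

# A sum computed on ciphertexts decrypts to the sum of the plaintexts.
a_enc = homomorphic_encrypt([1.0, 2.0])
b_enc = homomorphic_encrypt([3.0, 4.0])
sum_enc = [x + y for x, y in zip(a_enc, b_enc)]
print(homomorphic_decrypt(sum_enc))  # ~[4.0, 6.0]
```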
---
## 📧 **Contact Us** 📧
For more information, please reach out to us:
- **Email**: [sales@quantascript.com](mailto:sales@quantascript.com)
- **Phone**: 650-440-7704
---
**Happy Coding!** 🚀
|
manbeast3b/ZZZZZZZZZZZtest32 | manbeast3b | "2024-06-14T07:20:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T07:18:31Z" | Entry not found |
czhhh/path_to_saved_model | czhhh | "2024-06-14T09:15:14Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-14T07:20:15Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: runwayml/stable-diffusion-v1-5
inference: true
instance_prompt: a photo of sks dog
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - czhhh/path_to_saved_model
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
crazyming/test | crazyming | "2024-06-14T07:24:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:24:20Z" | Entry not found |
AlekseyElygin/mistral-7b-bnb-8bit-LORA | AlekseyElygin | "2024-06-14T07:29:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:29:15Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** AlekseyElygin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jkverma1/llm-server | Jkverma1 | "2024-06-14T07:29:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:29:23Z" | Entry not found |
bezzam/tapecam-mirflickr-trainable-inv-unet8M | bezzam | "2024-06-14T07:50:15Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-14T07:30:20Z" | ---
license: mit
---
|
Bibekananda/bk_lora_model1 | Bibekananda | "2024-06-14T07:31:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:31:31Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Bibekananda
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sanwuge/roberta-factchecking | sanwuge | "2024-06-14T07:33:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:33:02Z" | Entry not found |
medallo/a2 | medallo | "2024-06-14T09:36:03Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-14T07:33:47Z" | ---
license: mit
---
|
HealTether-Healthcare/llama3-8b-lora-finetuned-v1.1 | HealTether-Healthcare | "2024-06-14T07:37:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:37:24Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** HealTether-Healthcare
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nginbm/korean | nginbm | "2024-06-14T07:39:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:38:45Z" | Entry not found |
swoos/llama-2-7b-unsloth-KoCoT-2000 | swoos | "2024-06-14T07:47:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T07:41:40Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** swoos
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kraken777/mental-health-chat-bot | Kraken777 | "2024-06-14T07:41:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:41:44Z" | Entry not found |
chohtet/qwen2_7b_lora_test | chohtet | "2024-06-14T07:45:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:45:09Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
base_model: unsloth/Qwen2-7B
---
# Uploaded model
- **Developed by:** chohtet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chohtet/qwen2_7b_4bit_test | chohtet | "2024-06-14T07:47:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-14T07:45:34Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
base_model: unsloth/Qwen2-7B
---
# Uploaded model
- **Developed by:** chohtet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DavidLacour/distillgpt2untrained | DavidLacour | "2024-06-14T07:49:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T07:49:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidLacour/distillgpt2DPOtrainedm2scitas | DavidLacour | "2024-06-14T07:50:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T07:49:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HyperdustProtocol/HyperAoto-llama2-7b-965 | HyperdustProtocol | "2024-06-14T07:50:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T07:50:15Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** HyperdustProtocol
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlopaxalgoWjd5480/c4ai-command-r-v01 | AlopaxalgoWjd5480 | "2024-06-14T07:50:50Z" | 0 | 0 | transformers | [
"transformers",
"cohere",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-14T07:50:34Z" | Entry not found |
DBangshu/GPT2_e7_6_3 | DBangshu | "2024-06-14T07:51:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T07:51:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ilseokoh/dummy | ilseokoh | "2024-06-14T07:56:18Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-06-14T07:54:16Z" | ---
license: mit
tags:
- generated_from_keras_callback
base_model: camembert-base
model-index:
- name: dummy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
beyondkyleLee/lama3 | beyondkyleLee | "2024-06-14T07:55:27Z" | 0 | 0 | null | [
"license:llama3",
"region:us"
] | null | "2024-06-14T07:55:27Z" | ---
license: llama3
---
|
zhanjun/7b | zhanjun | "2024-06-14T08:27:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T07:56:11Z" | Entry not found |
QuyenHY/Lallva | QuyenHY | "2024-06-14T08:00:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:00:33Z" | Entry not found |
Dongchao/new_data | Dongchao | "2024-06-29T02:10:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:00:43Z" | Entry not found |
AlienKevin/ids_ccr | AlienKevin | "2024-06-18T23:46:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:02:15Z" | Entry not found |
0xfaskety/Qwen-Qwen1.5-7B-1718352134 | 0xfaskety | "2024-06-14T08:02:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:02:15Z" | Entry not found |
QuyenHY/Quyen | QuyenHY | "2024-06-14T08:03:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:03:58Z" | Entry not found |
QuyenHY/Q | QuyenHY | "2024-06-14T08:07:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:07:05Z" | Entry not found |
tqphu12421/demo_ml | tqphu12421 | "2024-06-14T08:08:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:08:41Z" | Entry not found |
kbrdek37/llama38binstruct_summarize | kbrdek37 | "2024-06-18T20:06:05Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | "2024-06-14T08:10:28Z" | ---
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama38binstruct_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama38binstruct_summarize
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5571 | 1.25 | 25 | 1.2650 |
| 0.518 | 2.5 | 50 | 1.4966 |
| 0.2207 | 3.75 | 75 | 1.7554 |
| 0.1096 | 5.0 | 100 | 1.7040 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
ShapeKapseln33/Bioxtrim333 | ShapeKapseln33 | "2024-06-14T08:15:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:11:53Z" | [new] Bioxtrim Experiences & Benefits BioXtrim weight-loss gummies differ from conventional weight-loss supplements and offer a convenient and delicious way to support healthy weight management. These gummies are formulated with a blend of natural ingredients carefully selected to promote fat burning, suppress appetite, and boost metabolism. Unlike harsh stimulants or restrictive diets, BioXtrim gummies offer a gentle yet effective approach to reaching and maintaining a healthy weight.
**[Click here to buy now on the official Bioxtrim website](https://slim-gummies-deutschland.de/bioxtrim-de)**
##Benefits of Bioxtrim
The advantages of Bioxtrim gummies over other slimming products are manifold. They are valued not only for their effectiveness in losing weight but also for their ease of use. Users appreciate the following benefits:
Easy integration into everyday life: Bioxtrim gummies fit into a daily routine with little effort.
Positive reviews: The many positive testimonials give other potential users confidence that they can achieve similar results.
Even though Bioxtrim is promoted as a useful weight-loss supplement, it remains important for consumers to maintain a balanced diet and sufficient exercise alongside taking the gummies. Only then is sustainable, healthy weight loss ensured.
##Purchasing Information
When buying BioXtrim gummies, it is important to keep an eye on prices and available offers. Customers should check the official website for current prices and possible savings through coupons or discount codes.
##Prices
Prices for BioXtrim gummies can vary depending on the seller and the packaging. It is recommended to check prices on the official website, as it provides current prices and information directly from the manufacturer. In addition, ordering larger quantities can save money, since larger packages often offer a reduced price per unit.
##Available Coupons and Discount Codes
The official BioXtrim website frequently offers coupons and discount codes that can be redeemed at checkout. These offers are usually time-limited and can lead to significant savings. Interested buyers should visit the website regularly or sign up for the newsletter to receive information about current discount promotions. A table or list of current discounts is not available, as they can change constantly.
**[Click here to buy now on the official Bioxtrim website](https://slim-gummies-deutschland.de/bioxtrim-de)**
##Diet and Exercise Guidelines
Both diet and exercise play an important role in weight management with Bioxtrim gummies. Users should understand how Bioxtrim works effectively together with a balanced diet and a suitable exercise program.
##The Link Between Diet and Bioxtrim
Bioxtrim gummies are often promoted as a dietary supplement to help people stick to a calorie-reduced diet and make losing weight easier. However, it is important to emphasize that a balanced diet rich in vegetables, whole grains, and lean protein remains essential. Users should ensure a sufficient intake of macro- and micronutrients to boost metabolism and prevent deficiencies.
Calorie management: maintain a realistic calorie deficit.
Macronutrient distribution: consume proteins, fats, and carbohydrates in a balanced way.
Hydration: drink enough water to support metabolic processes.
##Combining with Exercise
Exercise is an essential part of any weight-management program. It increases calorie expenditure and builds muscle mass, which raises the basal metabolic rate. For a balanced training program, users should include both strength and endurance training. Bioxtrim alone cannot work miracles; continuous physical activity is required to achieve sustainable weight-loss results.
Endurance training: at least 150 minutes of moderate activity per week, such as brisk walking or swimming.
Strength training: muscle-building exercises for all major muscle groups at least twice a week.
##Consumer Protection and Manufacturer Information
When it comes to consumer protection, independent test reports and manufacturer transparency play a central role for consumers interested in BioXtrim gummies.
##Stiftung Warentest Report
Stiftung Warentest has not yet published any specific results on BioXtrim gummies. This underscores the need for consumers to rely on other sources and reports in order to make an informed decision.
##Manufacturer Transparency
The manufacturer's legal notice and transparent information are crucial for consumer trust. With BioXtrim, check whether clear manufacturer information such as company name, address, and contact details is provided. This is an indicator of reliability and of a willingness to respond to consumer inquiries.
**[Click here to buy now on the official Bioxtrim website](https://slim-gummies-deutschland.de/bioxtrim-de)**
|
blackhole33/llama-3-8b-Instruct-bnb-4bit-V3 | blackhole33 | "2024-06-14T11:27:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"uz",
"base_model:llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T08:12:04Z" | ---
language:
- uz
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** blackhole33
- **License:** apache-2.0
- **Finetuned from model :** llama-3-8b-Instruct-bnb-4bit
|
Coolwowsocoolwow/Hoopla | Coolwowsocoolwow | "2024-06-14T08:21:43Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T08:16:41Z" | ---
license: openrail
---
|
yangyida/llama_3_ecc_transcript_small_2 | yangyida | "2024-06-14T12:03:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"text-generation",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T08:17:28Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
pipeline_tag: text-generation
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** yangyida
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
imdatta0/qwen2_Magiccoder_evol_10k_qlora_ortho | imdatta0 | "2024-06-14T09:49:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T08:18:59Z" | ---
license: apache-2.0
library_name: peft
tags:
- unsloth
- generated_from_trainer
base_model: Qwen/Qwen2-7B
model-index:
- name: qwen2_Magiccoder_evol_10k_qlora_ortho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_Magiccoder_evol_10k_qlora_ortho
This model is a fine-tuned version of [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
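For reference, the total train batch size listed above is simply the per-device batch size multiplied by the gradient accumulation steps; a quick sanity check:

```python
# Effective (total) train batch size = per-device batch size x accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the value listed above
```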
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8992 | 0.0261 | 4 | 0.9547 |
| 0.9045 | 0.0522 | 8 | 0.9234 |
| 0.9145 | 0.0783 | 12 | 0.9166 |
| 0.8688 | 0.1044 | 16 | 0.9117 |
| 0.9222 | 0.1305 | 20 | 0.9097 |
| 0.8108 | 0.1566 | 24 | 0.9090 |
| 0.8194 | 0.1827 | 28 | 0.9083 |
| 0.9616 | 0.2088 | 32 | 0.9086 |
| 0.8624 | 0.2349 | 36 | 0.9083 |
| 0.8898 | 0.2610 | 40 | 0.9088 |
| 0.9476 | 0.2871 | 44 | 0.9085 |
| 0.9156 | 0.3132 | 48 | 0.9091 |
| 0.8388 | 0.3393 | 52 | 0.9091 |
| 0.8429 | 0.3654 | 56 | 0.9087 |
| 0.8651 | 0.3915 | 60 | 0.9081 |
| 0.9228 | 0.4176 | 64 | 0.9082 |
| 0.9167 | 0.4437 | 68 | 0.9076 |
| 0.8769 | 0.4698 | 72 | 0.9068 |
| 0.9009 | 0.4959 | 76 | 0.9069 |
| 0.8611 | 0.5220 | 80 | 0.9074 |
| 0.9496 | 0.5481 | 84 | 0.9070 |
| 0.8562 | 0.5742 | 88 | 0.9067 |
| 0.943 | 0.6003 | 92 | 0.9060 |
| 0.8718 | 0.6264 | 96 | 0.9053 |
| 0.9642 | 0.6525 | 100 | 0.9046 |
| 0.8425 | 0.6786 | 104 | 0.9042 |
| 0.886 | 0.7047 | 108 | 0.9040 |
| 0.8576 | 0.7308 | 112 | 0.9043 |
| 0.823 | 0.7569 | 116 | 0.9036 |
| 0.8158 | 0.7830 | 120 | 0.9032 |
| 0.8854 | 0.8091 | 124 | 0.9031 |
| 0.8502 | 0.8352 | 128 | 0.9030 |
| 0.9493 | 0.8613 | 132 | 0.9026 |
| 0.8934 | 0.8874 | 136 | 0.9026 |
| 0.9158 | 0.9135 | 140 | 0.9026 |
| 0.8686 | 0.9396 | 144 | 0.9026 |
| 0.9321 | 0.9657 | 148 | 0.9027 |
| 0.8882 | 0.9918 | 152 | 0.9025 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
ar9av/phi-finetuned-value-charts-fixed | ar9av | "2024-06-14T08:23:07Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"phi3_v",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-06-14T08:20:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FanFierik/Zerocalcare | FanFierik | "2024-06-14T08:21:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:21:36Z" | Entry not found |
imdatta0/llama_2_13b_Magiccoder_evol_10k_qlora_ortho | imdatta0 | "2024-06-14T11:14:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-2-13b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T08:21:52Z" | ---
license: apache-2.0
library_name: peft
tags:
- unsloth
- generated_from_trainer
base_model: unsloth/llama-2-13b-bnb-4bit
model-index:
- name: llama_2_13b_Magiccoder_evol_10k_qlora_ortho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_2_13b_Magiccoder_evol_10k_qlora_ortho
This model is a fine-tuned version of [unsloth/llama-2-13b-bnb-4bit](https://huggingface.co/unsloth/llama-2-13b-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2067 | 0.0262 | 4 | 1.1823 |
| 1.1675 | 0.0523 | 8 | 1.1498 |
| 1.1004 | 0.0785 | 12 | 1.1349 |
| 1.0531 | 0.1047 | 16 | 1.1288 |
| 1.0946 | 0.1308 | 20 | 1.1246 |
| 1.0602 | 0.1570 | 24 | 1.1215 |
| 1.0636 | 0.1832 | 28 | 1.1175 |
| 1.1078 | 0.2093 | 32 | 1.1151 |
| 1.04 | 0.2355 | 36 | 1.1125 |
| 1.115 | 0.2617 | 40 | 1.1123 |
| 1.0994 | 0.2878 | 44 | 1.1102 |
| 1.1379 | 0.3140 | 48 | 1.1098 |
| 1.1145 | 0.3401 | 52 | 1.1064 |
| 1.0849 | 0.3663 | 56 | 1.1088 |
| 1.1317 | 0.3925 | 60 | 1.1087 |
| 1.134 | 0.4186 | 64 | 1.1056 |
| 1.0856 | 0.4448 | 68 | 1.1038 |
| 1.0972 | 0.4710 | 72 | 1.1004 |
| 1.044 | 0.4971 | 76 | 1.1005 |
| 1.1311 | 0.5233 | 80 | 1.1004 |
| 1.1474 | 0.5495 | 84 | 1.1002 |
| 1.0886 | 0.5756 | 88 | 1.0999 |
| 1.0372 | 0.6018 | 92 | 1.0973 |
| 1.0376 | 0.6280 | 96 | 1.0968 |
| 1.1006 | 0.6541 | 100 | 1.0965 |
| 1.09 | 0.6803 | 104 | 1.0964 |
| 1.0786 | 0.7065 | 108 | 1.0969 |
| 1.111 | 0.7326 | 112 | 1.0970 |
| 1.053 | 0.7588 | 116 | 1.0961 |
| 1.0764 | 0.7850 | 120 | 1.0948 |
| 1.0971 | 0.8111 | 124 | 1.0944 |
| 1.0572 | 0.8373 | 128 | 1.0948 |
| 0.999 | 0.8635 | 132 | 1.0949 |
| 1.1098 | 0.8896 | 136 | 1.0951 |
| 1.0215 | 0.9158 | 140 | 1.0951 |
| 1.0759 | 0.9419 | 144 | 1.0951 |
| 1.096 | 0.9681 | 148 | 1.0950 |
| 1.08 | 0.9943 | 152 | 1.0950 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
giu-alb/q-FrozenLake-v1-4x4-noSlippery | giu-alb | "2024-06-14T08:22:28Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-14T08:22:25Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # `load_from_hub` is the small helper from the Hugging Face Deep RL course notebooks

model = load_from_hub(repo_id="giu-alb/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
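Once the Q-table is loaded, acting is just a greedy lookup per state. A minimal numpy-only sketch (the toy table values below are illustrative, and the key under which the real checkpoint stores its table, e.g. `model["qtable"]`, is an assumption):

```python
import numpy as np

# Toy Q-table with 4 states x 2 actions (illustrative values only).
qtable = np.array([
    [0.1, 0.9],
    [0.5, 0.2],
    [0.0, 0.0],
    [0.3, 0.7],
])

def greedy_action(qtable, state):
    """Act greedily: pick the action with the highest Q-value for `state`."""
    return int(np.argmax(qtable[state]))

print([greedy_action(qtable, s) for s in range(4)])  # [1, 0, 0, 1]
```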
|
QuyenHY/Lallva3 | QuyenHY | "2024-06-14T08:22:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:22:48Z" | Entry not found |
Ajay098/resent50 | Ajay098 | "2024-06-14T08:24:29Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-14T08:24:29Z" | ---
license: mit
---
|
giu-alb/q-Taxi-v3 | giu-alb | "2024-06-14T08:29:13Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-14T08:29:11Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # `load_from_hub` is the small helper from the Hugging Face Deep RL course notebooks

model = load_from_hub(repo_id="giu-alb/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DBangshu/GPT2_e7_7_3 | DBangshu | "2024-06-14T08:29:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T08:29:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
binhnx8/TTS_v1 | binhnx8 | "2024-06-14T08:38:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-14T08:30:29Z" | Entry not found |
ridhoyp/vanka-ai | ridhoyp | "2024-06-14T08:30:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T08:30:35Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** ridhoyp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
trambui/Mistral-7B-Instruct-v0.1-sharded | trambui | "2024-06-14T08:31:01Z" | 0 | 0 | null | [
"license:cc",
"region:us"
] | null | "2024-06-14T08:31:01Z" | ---
license: cc
---
|
alibaba-yuanjing-aigclab/ViViD | alibaba-yuanjing-aigclab | "2024-06-17T11:39:34Z" | 0 | 0 | null | [
"arxiv:2405.11794",
"region:us"
] | null | "2024-06-14T08:34:01Z" | # ViViD
ViViD: Video Virtual Try-on using Diffusion Models
[![arXiv](https://img.shields.io/badge/arXiv-2405.11794-b31b1b.svg)](https://arxiv.org/abs/2405.11794)
[![Project Page](https://img.shields.io/badge/Project-Website-green)](https://alibaba-yuanjing-aigclab.github.io/ViViD)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-yellow)](https://huggingface.co/alibaba-yuanjing-aigclab/ViViD)
## Installation
```
git clone https://github.com/alibaba-yuanjing-aigclab/ViViD
cd ViViD
```
### Environment
```
conda create -n vivid python=3.10
conda activate vivid
pip install -r requirements.txt
```
### Weights
You can place the weights anywhere you like, for example, ```./ckpts```. If you put them somewhere else, you just need to update the path in ```./configs/prompts/*.yaml```.
#### Stable Diffusion Image Variations
```
cd ckpts
git lfs install
git clone https://huggingface.co/lambdalabs/sd-image-variations-diffusers
```
#### SD-VAE-ft-mse
```
git lfs install
git clone https://huggingface.co/stabilityai/sd-vae-ft-mse
```
#### Motion Module
Download [mm_sd_v15_v2](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v15_v2.ckpt)
#### ViViD
```
git lfs install
git clone https://huggingface.co/alibaba-yuanjing-aigclab/ViViD
```
## Inference
We provide two demos in ```./configs/prompts/```, run the following commands to have a try😼.
```
python vivid.py --config ./configs/prompts/upper1.yaml
python vivid.py --config ./configs/prompts/lower1.yaml
```
## Data
As illustrated in ```./data```, the following data should be provided.
```text
./data/
|-- agnostic
| |-- video1.mp4
| |-- video2.mp4
| ...
|-- agnostic_mask
| |-- video1.mp4
| |-- video2.mp4
| ...
|-- cloth
| |-- cloth1.jpg
| |-- cloth2.jpg
| ...
|-- cloth_mask
| |-- cloth1.jpg
| |-- cloth2.jpg
| ...
|-- densepose
| |-- video1.mp4
| |-- video2.mp4
| ...
|-- videos
| |-- video1.mp4
| |-- video2.mp4
| ...
```
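Before running inference, it can help to verify that the layout matches what the demos expect. A small stdlib-only check (the directory names are taken from the tree above):

```python
from pathlib import Path

# The six sub-directories ViViD expects under ./data (per the layout above).
REQUIRED = ["agnostic", "agnostic_mask", "cloth", "cloth_mask", "densepose", "videos"]

def missing_dirs(data_root):
    """Return the expected sub-directories that are absent under `data_root`."""
    root = Path(data_root)
    return [name for name in REQUIRED if not (root / name).is_dir()]

if __name__ == "__main__":
    missing = missing_dirs("./data")
    print("missing:", ", ".join(missing) if missing else "none")
```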
### Agnostic and agnostic_mask video
This part is a bit more involved; you can obtain them in any of the following three ways:
1. Follow [OOTDiffusion](https://github.com/levihsu/OOTDiffusion) to extract them frame by frame (recommended).
2. Use [SAM](https://github.com/facebookresearch/segment-anything) + Gaussian blur (see ```./tools/sam_agnostic.py``` for an example).
3. Use a mask editor tool.
Note that the shape and size of the agnostic area may affect the try-on results.
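The blur step in option 2 softens the mask boundary so the inpainted region blends smoothly. A numpy-only sketch of the idea (a simple box blur standing in for the Gaussian blur a real pipeline would apply, e.g. via OpenCV, to the SAM masks):

```python
import numpy as np

def feather_mask(mask, iterations=2):
    """Soften a binary mask's edges with a simple box blur
    (a stand-in for the Gaussian blur applied to SAM masks)."""
    m = mask.astype(np.float32)
    for _ in range(iterations):
        p = np.pad(m, 1)  # zero-pad so values shrink toward 0 at the border
        m = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
             + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    return m

mask = np.zeros((8, 8), dtype=np.uint8)
mask[3:5, 3:5] = 1
soft = feather_mask(mask)
# Pixels near the mask boundary now take intermediate values between 0 and 1.
```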
### Densepose video
See [vid2densepose](https://github.com/Flode-Labs/vid2densepose) (thanks!).
### Cloth mask
Any segmentation tool can be used to obtain the mask, e.g. [SAM](https://github.com/facebookresearch/segment-anything).
## BibTeX
```text
@misc{fang2024vivid,
title={ViViD: Video Virtual Try-on using Diffusion Models},
author={Zixun Fang and Wei Zhai and Aimin Su and Hongliang Song and Kai Zhu and Mao Wang and Yu Chen and Zhiheng Liu and Yang Cao and Zheng-Jun Zha},
year={2024},
eprint={2405.11794},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Contact Us
**Zixun Fang**: [zxfang1130@gmail.com](mailto:zxfang1130@gmail.com)
**Yu Chen**: [chenyu.cheny@alibaba-inc.com](mailto:chenyu.cheny@alibaba-inc.com)
|
GGmorello/gpt2-imdb-llama-rlaif | GGmorello | "2024-06-15T12:52:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T08:34:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
againeureka/multiplechoice_robertabase02_hallym_tmp | againeureka | "2024-06-14T08:38:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:38:23Z" | Entry not found |
wepzen/mnlp-rag-datasets | wepzen | "2024-06-14T09:34:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:38:58Z" | Entry not found |
stiucsib/gemma_it_quantfr_mixed | stiucsib | "2024-06-14T08:42:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-06-14T08:40:32Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alireza20008/a1 | alireza20008 | "2024-06-14T08:41:59Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T08:41:59Z" | ---
license: openrail
---
|
mohibovais79/model | mohibovais79 | "2024-06-14T08:43:02Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T08:43:01Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mohibovais79
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TimeMobius/Mobius-RWKV-r6-12B | TimeMobius | "2024-06-15T01:20:43Z" | 0 | 4 | null | [
"arxiv:2404.05892",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T08:44:47Z" | ---
license: apache-2.0
---
This is an experimental model, yet it is the most powerful RNN model in the world.
# Mobius RWKV r6 chat 12B 16k
Mobius is a RWKV v6 arch chat model, benefiting from [Matrix-Valued States and Dynamic Recurrence](https://arxiv.org/abs/2404.05892).
## Introduction
Mobius is a RWKV v6 arch model, a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data.
In comparison with the previously released Mobius, the improvements include:
* Only 24 GB of VRAM is needed to run this model locally in fp16;
* Significant performance improvement in Chinese;
* Stable support for a 16K context length;
* Function-call support.
## Usage
Chat format: User: xxxx\n\nAssistant: xxx\n\n
Recommended temperature and top-p: 1.0 and 0.3
Function call format example:
```
System: You are a helpful assistant with access to the following functions. Use them if required -{
"name": "get_exchange_rate",
"description": "Get the exchange rate between two currencies",
"parameters": {
"type": "object",
"properties": {
"base_currency": {
"type": "string",
"description": "The currency to convert from"
},
"target_currency": {
"type": "string",
"description": "The currency to convert to"
}
},
"required": [
"base_currency",
"target_currency"
]
}
}
User: Hi, I need to know the exchange rate from USD to EUR
Assistant: xxxx
Observation: xxxx
Assistant: xxxx
```
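The plain-text chat format above can be sketched as a small prompt builder (a hypothetical helper, not part of the model's tooling):

```python
# Minimal sketch: render (role, text) turns into the
# "User: xxxx\n\nAssistant: xxx\n\n" format the model expects,
# leaving the final Assistant turn open for the model to complete.
def build_prompt(turns):
    prompt = "".join(f"{role}: {text}\n\n" for role, text in turns)
    return prompt + "Assistant:"
```

A System turn (as in the function-call example) can simply be passed as the first `("System", ...)` tuple.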
## More details
Mobius 12B 16k is based on the RWKV v6 arch, a leading state-based RNN+CNN+Transformer mixed large language model with a focus on the open-source community:
* 10~100x training/inference cost reduction;
* state-based, selective memory, which makes it good at grokking;
* community support.
## Requirements
21.9 GB of VRAM is needed to run fp16, 13.7 GB for int8, and 7.2 GB for nf4 with the Ai00 server.
* [RWKV Runner](https://github.com/josStorer/RWKV-Runner)
* [Ai00 server](https://github.com/cgisky1980/ai00_rwkv_server)
## Benchmark
* C-Eval: 63.53
* CMMLU: 76.07
|
mohibovais79/lora_model | mohibovais79 | "2024-06-14T08:45:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T08:45:06Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mohibovais79
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vuongminhkhoi4/mv_vton | vuongminhkhoi4 | "2024-06-17T08:52:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:47:54Z" | Entry not found |
milouvollebregt/test | milouvollebregt | "2024-06-14T08:49:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:49:23Z" | # Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B") |
gordonsong1225/bigbird-document-classifier | gordonsong1225 | "2024-06-14T09:24:05Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T08:49:47Z" | ---
license: apache-2.0
---
Classifiers trained on the MuSiQue, HotpotQA, and 2WikiMultihopQA datasets, respectively, are used to classify whether two passages can contribute to answering the same question. |
RobertML/a-tribe-called-bittensor | RobertML | "2024-06-18T22:40:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:50:22Z" | Entry not found |
ahmedsamirio/mistral-sql-create-context-lora | ahmedsamirio | "2024-06-14T11:26:51Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T08:51:59Z" | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.3
model-index:
- name: mistral-sql-create-context-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: b-mc2/sql-create-context
type:
# JSONL file contains question, context, answer fields per line.
# This gets mapped to instruction, input, output axolotl tags.
field_instruction: question
field_input: context
field_output: answer
# Format is used by axolotl to generate the prompt.
format: |-
[INST] Using the schema context below, generate a SQL query that answers the question.
{input}
{instruction} [/INST]
tokens: # add new control tokens from the dataset to the model
- "[INST]"
- " [/INST]"
- "[SQL]"
- " [/SQL]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/mistral-sql-create-context-lora
hub_model_id: ahmedsamirio/mistral-sql-create-context-lora
# This is set to 4096 in the modal config, why?
# Since I'm using sample packing, decreasing the sequence length will create smaller batches
# which can fit better into memory
sequence_len: 8192
# These are set to false in the modal example, why? (Modal also uses FSDP, which might be a reason)
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral
- embed_tokens
- lm_head
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project: mistral-sql-create-context
wandb_entity: ahmedsamirio
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 4
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
# What is this?
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
# This wasn't set in modal config
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
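The prompt `format` in the config above can be sketched as a renderer (illustrative only; axolotl performs this templating internally from the `field_instruction`/`field_input` mappings):

```python
# Minimal sketch of how the config's prompt format maps a question
# (field_instruction) and schema context (field_input) into the training prompt.
def render_prompt(question, context):
    return (
        "[INST] Using the schema context below, generate a SQL query "
        "that answers the question.\n"
        f"{context}\n"
        f"{question} [/INST]"
    )
```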
</details> |
MartaSamoilenko/phoneme_to_text_T5_3b | MartaSamoilenko | "2024-06-14T08:53:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:53:31Z" | Entry not found |
anilbhatt1/peft_phi2_v0_l4500 | anilbhatt1 | "2024-06-14T08:55:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T08:54:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fanny-pack-dork/Llama-3-8b-chat-finetune | fanny-pack-dork | "2024-06-14T09:00:28Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T08:55:01Z" | Entry not found |
ivan23-24-29/Alvin | ivan23-24-29 | "2024-06-14T08:56:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T08:56:25Z" | Entry not found |
BiancaG/Mylabelai | BiancaG | "2024-06-14T08:59:27Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2024-06-14T08:59:27Z" | ---
license: artistic-2.0
---
|
JeremyRivera/my_awesome_model | JeremyRivera | "2024-06-14T09:02:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:02:42Z" | Entry not found |
IntraFind/multilingual-e5-small | IntraFind | "2024-06-14T12:40:28Z" | 0 | 0 | transformers | [
"transformers",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T09:03:50Z" | ---
license: mit
language:
- multilingual
---
This is a copy of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)
It contains the ONNX model in zip format and the config.json required by OpenSearch for adding it as a custom local model.
|
Yellownegi/01 | Yellownegi | "2024-06-14T09:04:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:04:01Z" | Entry not found |
Peter2222/codec | Peter2222 | "2024-06-14T09:06:00Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-14T09:04:30Z" | ---
license: mit
---
|
Adrien35/git-base-pokemon | Adrien35 | "2024-06-14T13:26:19Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T09:06:07Z" | Entry not found |
Dandan0K/Intervention-xls-FR-Ref | Dandan0K | "2024-06-14T09:11:14Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-14T09:06:09Z" | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 French by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 16.85
- name: Test CER
type: cer
value: 4.66
- name: Test WER (+LM)
type: wer
value: 16.32
- name: Test CER (+LM)
type: cer
value: 4.21
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Dev WER
type: wer
value: 22.34
- name: Dev CER
type: cer
value: 9.88
- name: Dev WER (+LM)
type: wer
value: 17.16
- name: Dev CER (+LM)
type: cer
value: 9.38
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 19.15
---
# Fine-tuned XLS-R 1B model for speech recognition in French
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on French using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [MediaSpeech](https://www.openslr.org/108/), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-french,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-french}},
year={2022}
}
``` |
seungminh/ndxl-13k-blip | seungminh | "2024-06-14T09:07:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:07:03Z" | Entry not found |
DBangshu/GPT2_e7_8_3 | DBangshu | "2024-06-14T09:07:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T09:07:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Soumyajit2709/dataset | Soumyajit2709 | "2024-06-14T09:07:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:07:07Z" | Entry not found |
vaibhavprajapati22/Image_Denoising_CBDNet | vaibhavprajapati22 | "2024-06-15T06:17:02Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"image-processing",
"image-denoising",
"deep-learning",
"en",
"arxiv:1807.04686",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T09:07:53Z" |
---
language: en
tags:
- image-processing
- image-denoising
- deep-learning
license: apache-2.0
model_name: Image Denoising Model
---
# Convolutional Blind Denoising
This repo contains the weights and config of the CBDNet model.
# Datasets
- LOL Dataset
- Smartphone Image Denoising Dataset
# References
https://arxiv.org/pdf/1807.04686v2.pdf
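A distinctive idea in the referenced CBDNet paper is the asymmetric loss on the noise-estimation subnetwork: under-estimating the noise level leaves visible noise and hurts denoising more than over-estimating it, so the two error directions are weighted unequally. A simplified pure-Python sketch of that loss (per-pixel form; the `alpha` value is illustrative):

```python
def asymmetric_noise_loss(sigma_hat, sigma_true, alpha=0.3):
    """Asymmetric squared error over estimated noise levels.
    With alpha < 0.5, under-estimation (sigma_hat < sigma_true)
    gets weight |alpha - 1|, over-estimation gets weight alpha,
    so under-estimation is penalized more heavily."""
    total = 0.0
    for s_hat, s in zip(sigma_hat, sigma_true):
        diff = s_hat - s
        weight = abs(alpha - (1 if diff < 0 else 0))
        total += weight * diff * diff
    return total / len(sigma_true)

# Same error magnitude, different direction:
print(asymmetric_noise_loss([0.0], [1.0]))  # under-estimate -> 0.7
print(asymmetric_noise_loss([2.0], [1.0]))  # over-estimate  -> 0.3
```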
|
kg7/HindiGPT | kg7 | "2024-06-14T09:16:35Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-14T09:14:32Z" | ---
license: mit
---
|
kvsreyas17/roberta-base-mitmovie-entity-recognition | kvsreyas17 | "2024-06-14T14:04:17Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-06-14T09:14:51Z" | ---
language:
- en
metrics:
- accuracy
--- |
HanaTNT/ALX_AI_W1 | HanaTNT | "2024-06-14T09:16:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:16:36Z" | Entry not found |
DavidLacour/zephyr32bitsmilestone2 | DavidLacour | "2024-06-14T09:17:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:17:07Z" | Entry not found |
onyxmytrojin/latencymodel | onyxmytrojin | "2024-06-14T09:17:20Z" | 0 | 0 | null | [
"license:llama3",
"region:us"
] | null | "2024-06-14T09:17:20Z" | ---
license: llama3
---
|
DavidLacour/zephyr4bitsmilestone2 | DavidLacour | "2024-06-14T09:18:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:18:06Z" | Entry not found |
DavidLacour/zephyrmilstone216bits | DavidLacour | "2024-06-14T10:13:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:19:56Z" | Entry not found |
DavidLacour/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 | DavidLacour | "2024-06-14T09:28:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/zephyr-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T09:22:07Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/zephyr-sft
---
# Uploaded model
- **Developed by:** DavidLacour
- **License:** apache-2.0
- **Finetuned from model :** unsloth/zephyr-sft
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Boostaro155/LeptiCell155 | Boostaro155 | "2024-06-14T09:24:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:22:13Z" | # Lepti Cell US Reviews & Benefits – LeptiCell Ingredients, Doses & Intake Price, Buy
LeptiCell is a potent weight loss supplement that supports fast and effective weight loss. Its formula is based on recent research into the true root cause of weight gain and weight-loss resistance in individuals.
## **[Click Here To Buy Now From Official Website Of LeptiCell](https://slim-gummies-deutschland.de/lepticell-us)**
## Manufacturing Standards Of LeptiCell Supplement
Now, let us look at the manufacturing standards followed by Phytage Labs while formulating LeptiCell. This nutrition brand has made each batch of the formula using 23 potent ingredients in lab facilities that are FDA-approved and GMP-compliant. After this, the supplement has been subjected to clinical testing in third-party labs.
All these tests found the formula to be free from gluten, sugar, GMOs, wheat, salt, starch, soy derivatives, corn, yeast, color, and lactose. This is how the LeptiCell manufacturer guarantees safety and quality.
## Dosage And Instructions To Use LeptiCell
Including LeptiCell in your daily routine is quite simple, as the formula comes in capsule form. Each bottle contains 60 capsules, a 30-day supply. The suggested serving is two capsules per day at any time, with or without food. Do not exceed the recommended dosage, as you might experience side effects like breathing issues, dizziness, and headache.
It is suggested to take the LeptiCell pills regularly for around 2 to 4 months without fail to get visible results. Following a healthy and balanced diet and exercising can help boost the results and also improve overall health.
## LeptiCell Customer Reviews And Complaints
Surveying customers' responses to a health supplement is the best way to verify whether it is safe and effective. Genuine responses to the LeptiCell formula are available from several sources, such as healthcare forums and review websites. Most reviews are positive, with people commenting on the ease of use and the significant weight loss they experienced.
A few complaints are also available, pointing to a delay in results. This delay is because the formula is natural and requires time to adapt to each person's requirements. Overall, LeptiCell has received an overwhelmingly positive response from customers, garnering an impressive average rating of 4.7 out of 5.
## Who Should And Should Not Use LeptiCell?
The LeptiCell weight loss supplement has been created for all people above the age of 18 years. The dosage of ingredients used is as per the requirements of the adult body. So, children should not take the pills to lose weight. Also, people who have allergies, those with a known medical condition or those taking medications, pregnant or nursing women, and people awaiting surgeries should use LeptiCell only after consulting a health professional.
## Extra Tips To Boost Weight Loss Results
Some additional tips can be included in your daily routine to boost your weight loss results. These are mentioned below:
It is important to balance your plate by including a variety of foods such as protein, fat, and vegetables
Try cardio workouts like running, cycling, swimming, jogging, etc.
## **[Click Here To Buy Now From Official Website Of LeptiCell](https://slim-gummies-deutschland.de/lepticell-us)** |
Sampe12/tuned-palmyra-med | Sampe12 | "2024-06-14T09:41:59Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-14T09:25:09Z" | Entry not found |
yuehanui/_teset | yuehanui | "2024-06-14T09:25:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T09:25:37Z" | Entry not found |
LypWhyton/LypMarkIII | LypWhyton | "2024-06-14T09:27:04Z" | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | "2024-06-14T09:27:04Z" | ---
license: afl-3.0
---
|