modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, lengths 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (unknown) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
ReplaceHumanWithAI/qwen1.5-llm | ReplaceHumanWithAI | "2024-06-16T14:38:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:38:11Z" | Entry not found |
axssel/duncan_robinson | axssel | "2024-06-16T23:24:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:39:00Z" | Entry not found |
anon11112/bikiniunder | anon11112 | "2024-06-16T14:39:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:39:37Z" | Entry not found |
aitorrent/dolphin-2.9.2-qwen2-7b-gguf | aitorrent | "2024-06-16T14:49:43Z" | 0 | 0 | null | [
"torrent",
"text-generation",
"region:us"
] | text-generation | "2024-06-16T14:39:54Z" | ---
quantized_by: bartowski
pipeline_tag: text-generation
tags:
- torrent
---
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/U7U2ZEFWU)
## Llamacpp imatrix Quantizations of dolphin-2.9.2-qwen2-7b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
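As a sketch, the ChatML-style template above can be filled in programmatically (the `build_prompt` helper below is illustrative, not part of the model's tooling):

```python
def build_prompt(system_prompt, prompt):
    """Assemble the ChatML-style prompt shown above, ending at the assistant turn."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = build_prompt("You are Dolphin, a helpful assistant.", "Hello!")
print(text)
```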
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/dolphin-2.9.2-qwen2-7b-GGUF --include "dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/dolphin-2.9.2-qwen2-7b-GGUF --include "dolphin-2.9.2-qwen2-7b-Q8_0.gguf/*" --local-dir dolphin-2.9.2-qwen2-7b-Q8_0
```
You can either specify a new local-dir (dolphin-2.9.2-qwen2-7b-Q8_0) or download them all in place (./)
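The `--include` flag filters repo files with shell-style glob patterns. The minimal sketch below (standard library only, with a hypothetical file listing) shows how such a pattern selects the split files of a large quant:

```python
import fnmatch

# Hypothetical file listing of a GGUF repo containing one split quant.
repo_files = [
    "dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf",
    "dolphin-2.9.2-qwen2-7b-Q8_0.gguf/dolphin-2.9.2-qwen2-7b-Q8_0-00001-of-00002.gguf",
    "dolphin-2.9.2-qwen2-7b-Q8_0.gguf/dolphin-2.9.2-qwen2-7b-Q8_0-00002-of-00002.gguf",
    "README.md",
]

def select(files, pattern):
    """Mimic the glob filtering that --include applies to repo file paths."""
    return [f for f in files if fnmatch.fnmatch(f, pattern)]

# The pattern from the command above picks up both halves of the split Q8_0 quant.
print(select(repo_files, "dolphin-2.9.2-qwen2-7b-Q8_0.gguf/*"))
```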
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
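The VRAM rule of thumb above can be turned into a tiny helper. This is illustrative only; the file sizes below are made up for the example:

```python
def pick_quant(quants, vram_gb, headroom_gb=2.0):
    """Return the largest quant whose file size leaves `headroom_gb` of VRAM free."""
    fitting = {name: size for name, size in quants.items() if size <= vram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

# Hypothetical file sizes in GB for a 7B model's quants.
sizes = {"Q8_0": 8.1, "Q6_K": 6.3, "Q5_K_M": 5.4, "Q4_K_M": 4.7, "IQ3_M": 3.6}
print(pick_quant(sizes, vram_gb=8.0))  # -> Q5_K_M (5.4 GB fits under 8 - 2 = 6 GB)
```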
The I-quants are *not* compatible with Vulkan, which is also available for AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. |
axssel/christiane_endler | axssel | "2024-06-16T20:13:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:41:17Z" | Entry not found |
TatevK/fintuningLLM | TatevK | "2024-06-16T14:42:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:42:01Z" | Entry not found |
axssel/raven_chileno | axssel | "2024-06-16T14:43:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:43:26Z" | Entry not found |
Akshay203/ak_lora_model_appointment | Akshay203 | "2024-06-16T14:48:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T14:48:15Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Akshay203
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tansw/mistral-instruct-reddit | tansw | "2024-06-16T14:48:34Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2024-06-16T14:48:28Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-instruct-reddit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-instruct-reddit
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 100
- mixed_precision_training: Native AMP
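The effective batch size follows from the settings above; a one-line sanity check (values taken from the list, assuming a single device):

```python
train_batch_size = 1
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single GPU

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # -> 4, matching the reported total_train_batch_size
```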
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
axssel/marcela_cubillos | axssel | "2024-06-16T15:23:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:51:05Z" | Entry not found |
anon11112/jenna | anon11112 | "2024-06-16T14:53:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:52:39Z" | Entry not found |
moschouChry/chronos-t5-finetuned_tiny_1-Patient0-fine-tuned_20240616_175107 | moschouChry | "2024-06-16T14:54:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T14:53:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anon11112/realistic | anon11112 | "2024-06-16T14:55:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:54:03Z" | Entry not found |
aitorrent/dolphin-2.9.2-qwen2-72b-gguf | aitorrent | "2024-06-16T15:19:32Z" | 0 | 0 | null | [
"torrent",
"text-generation",
"region:us"
] | text-generation | "2024-06-16T14:54:53Z" | ---
quantized_by: bartowski
pipeline_tag: text-generation
tags:
- torrent
---
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/U7U2ZEFWU)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available for AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. |
KeroroK66/SubaruOozora | KeroroK66 | "2024-06-16T14:55:46Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T14:55:13Z" | ---
license: openrail
---
|
anon11112/sexyattire | anon11112 | "2024-06-16T14:57:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:56:05Z" | Entry not found |
moschouChry/chronos-t5-finetuned_tiny_1-Patient0-fine-tuned_20240616_175503 | moschouChry | "2024-06-16T14:57:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:57:13Z" | Entry not found |
whizzzzkid/G_59000 | whizzzzkid | "2024-06-16T14:59:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:58:28Z" | Entry not found |
whizzzzkid/G_58000 | whizzzzkid | "2024-06-16T15:00:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T14:59:56Z" | Entry not found |
moschouChry/chronos-t5-finetuned_tiny_1-Patient0-fine-tuned_20240616_175811 | moschouChry | "2024-06-16T15:01:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T15:00:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RaNgO11/text-to-image | RaNgO11 | "2024-06-16T15:00:48Z" | 0 | 0 | null | [
"en",
"region:us"
] | null | "2024-06-16T15:00:16Z" | ---
language:
- en
--- |
MrezaPRZ/codellama_database_learning_synthetic_data_bird_dev_set_with_knowledge | MrezaPRZ | "2024-06-16T15:06:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T15:01:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2 | kim512 | "2024-06-17T04:08:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama 3",
"70b",
"arimas",
"story",
"roleplay",
"rp",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-16T15:01:36Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- 70b
- arimas
- story
- roleplay
- rp
---
# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1.6](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1.6)
[3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2)
[3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2)
[4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2)
[4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2)
[6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2)
[8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2)
Created using the defaults from exllamav2 1.4.0 convert.py
3.0bpw to 6.0bpw head bits = 6
8.0bpw head bits = 8
length = 8192
dataset rows = 200
measurement rows = 32
measurement length = 8192
# model
Llama-3-70b-Arimas-story-RP-V1.6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
I greatly expanded the number of models used in this merge and experimented a lot with different ideas.
This version feels a lot more convincing than V1.5. Hopefully the long context window will also remain strong after quants.
Because of the many merges, I switched back from BFloat to Float.
I tried breadcrumbs without the ties, but that went very poorly.
### Merge Method
This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
### Models Merged
The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Meta-LLama-3-Cat-Smaug-LLama-70b
* \Meta-LLama-3-Cat-A-LLama-70b
* \Llama-3-70B-Synthia-v3.5
* \Llama-3-70B-Instruct-Gradient-524k
* \Llama-3-70B-Instruct-Gradient-262k
* \Tess-2.0-Llama-3-70B-v0.2
* \Llama-3-Lumimaid-70B-v0.1-alt
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: \Llama-3-70B-Instruct-Gradient-262k
parameters:
weight: 0.25
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-Smaug-LLama-70b
parameters:
weight: 0.28
density: 0.90
gamma: 0.01
- model: \Llama-3-Lumimaid-70B-v0.1-alt
parameters:
weight: 0.15
density: 0.90
gamma: 0.01
- model: \Tess-2.0-Llama-3-70B-v0.2
parameters:
weight: 0.06
density: 0.90
gamma: 0.01
- model: \Smaug-Llama-3-70B-Instruct
parameters:
weight: 0.04
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Synthia-v3.5
parameters:
weight: 0.05
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Instruct-Gradient-524k
parameters:
weight: 0.03
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-A-LLama-70b
parameters:
weight: 0.14
density: 0.90
gamma: 0.01
merge_method: breadcrumbs_ties
base_model: I:\Llama-3-70B-Instruct-Gradient-262k
dtype: float16
``` |
kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2 | kim512 | "2024-06-17T04:08:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama 3",
"70b",
"arimas",
"story",
"roleplay",
"rp",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-16T15:01:43Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- 70b
- arimas
- story
- roleplay
- rp
---
# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1.6](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1.6)
[3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2)
[3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2)
[4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2)
[4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2)
[6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2)
[8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2)
Created using the defaults from exllamav2 1.4.0 convert.py
3.0bpw to 6.0bpw head bits = 6
8.0bpw head bits = 8
length = 8192
dataset rows = 200
measurement rows = 32
measurement length = 8192
# model
Llama-3-70b-Arimas-story-RP-V1.6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
I greatly expanded the number of models used in this merge and experimented a lot with different ideas.
This version feels a lot more convincing than V1.5. Hopefully the long context window will also remain strong after quants.
Because of the many merges, I switched back from BFloat to Float.
I tried breadcrumbs without the ties, but that went very poorly.
### Merge Method
This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
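For intuition, the breadcrumbs part of the method keeps only a middle band of each model's parameter deltas relative to the base: as I read the mergekit parameters, `gamma` controls the fraction of largest-magnitude outliers dropped and `density` the overall fraction of deltas kept. The sketch below is a simplified illustration of that masking step, not mergekit's actual implementation:

```python
def breadcrumbs_mask(deltas, density=0.90, gamma=0.01):
    """Keep the middle band of parameter deltas by magnitude:
    drop the top `gamma` fraction (outliers) and enough of the
    smallest deltas so that only `density` of values survive.
    Simplified sketch; mergekit's real implementation differs."""
    n = len(deltas)
    ranked = sorted(range(n), key=lambda i: abs(deltas[i]))  # ascending by magnitude
    top_drop = int(gamma * n)            # largest-magnitude outliers to drop
    keep = int(density * n) - top_drop   # values kept from the middle band
    kept = set(ranked[n - top_drop - keep : n - top_drop])
    return [d if i in kept else 0.0 for i, d in enumerate(deltas)]
```

With `density: 0.90` and `gamma: 0.01` as in the config below, roughly the top 1% outlier deltas and the bottom 10% near-zero deltas of each model are discarded before the weighted combination.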
### Models Merged
The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Meta-LLama-3-Cat-Smaug-LLama-70b
* \Meta-LLama-3-Cat-A-LLama-70b
* \Llama-3-70B-Synthia-v3.5
* \Llama-3-70B-Instruct-Gradient-524k
* \Llama-3-70B-Instruct-Gradient-262k
* \Tess-2.0-Llama-3-70B-v0.2
* \Llama-3-Lumimaid-70B-v0.1-alt
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: \Llama-3-70B-Instruct-Gradient-262k
parameters:
weight: 0.25
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-Smaug-LLama-70b
parameters:
weight: 0.28
density: 0.90
gamma: 0.01
- model: \Llama-3-Lumimaid-70B-v0.1-alt
parameters:
weight: 0.15
density: 0.90
gamma: 0.01
- model: \Tess-2.0-Llama-3-70B-v0.2
parameters:
weight: 0.06
density: 0.90
gamma: 0.01
- model: \Smaug-Llama-3-70B-Instruct
parameters:
weight: 0.04
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Synthia-v3.5
parameters:
weight: 0.05
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Instruct-Gradient-524k
parameters:
weight: 0.03
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-A-LLama-70b
parameters:
weight: 0.14
density: 0.90
gamma: 0.01
merge_method: breadcrumbs_ties
base_model: I:\Llama-3-70B-Instruct-Gradient-262k
dtype: float16
``` |
kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2 | kim512 | "2024-06-17T04:08:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama 3",
"70b",
"arimas",
"story",
"roleplay",
"rp",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-16T15:01:49Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- 70b
- arimas
- story
- roleplay
- rp
---
# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1.6](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1.6)
[3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2)
[3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2)
[4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2)
[4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2)
[6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2)
[8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2)
Created using the defaults from exllamav2 1.4.0 convert.py:
- 3.0bpw to 6.0bpw: head bits = 6
- 8.0bpw: head bits = 8
- length = 8192
- dataset rows = 200
- measurement rows = 32
- measurement length = 8192
# model
Llama-3-70b-Arimas-story-RP-V1.6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
I greatly expanded the number of models used in this merge and experimented a lot with different ideas.
This version feels much more convincing than V1.5; hopefully the long context window will also remain strong after quantization.
Because of the many merges, I switched back from bfloat16 to float16.
I tried breadcrumbs without TIES, but that went very poorly.
### Merge Method
This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
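The `_ties` half of the method then resolves sign disagreements between the pruned deltas before summing. Below is a minimal sketch of the TIES elect-sign-and-merge step, based on my reading of the TIES-merging idea rather than mergekit's exact code; the weights and deltas in the usage note are toy placeholders:

```python
def ties_combine(delta_lists, weights):
    """Merge per-model delta vectors TIES-style: elect a dominant sign
    per parameter, then take the weighted sum of only those deltas
    that agree with it. Simplified illustration, not mergekit's code."""
    n = len(delta_lists[0])
    merged = []
    for i in range(n):
        vals = [d[i] for d in delta_lists]
        # Elect the dominant sign by total magnitude per sign.
        pos = sum(v for v in vals if v > 0)
        neg = -sum(v for v in vals if v < 0)
        sign = 1.0 if pos >= neg else -1.0
        # Weighted sum over the deltas matching the elected sign.
        merged.append(sum(w * v for w, v in zip(weights, vals) if v * sign > 0))
    return merged
```

For example, with three models whose deltas for one parameter are 1.0, 0.5, and -0.2 at weights 0.5, 0.3, and 0.2, the positive sign wins and the merged delta is 0.5·1.0 + 0.3·0.5 = 0.65.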
### Models Merged
The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Meta-LLama-3-Cat-Smaug-LLama-70b
* \Meta-LLama-3-Cat-A-LLama-70b
* \Llama-3-70B-Synthia-v3.5
* \Llama-3-70B-Instruct-Gradient-524k
* \Llama-3-70B-Instruct-Gradient-262k
* \Tess-2.0-Llama-3-70B-v0.2
* \Llama-3-Lumimaid-70B-v0.1-alt
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: \Llama-3-70B-Instruct-Gradient-262k
parameters:
weight: 0.25
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-Smaug-LLama-70b
parameters:
weight: 0.28
density: 0.90
gamma: 0.01
- model: \Llama-3-Lumimaid-70B-v0.1-alt
parameters:
weight: 0.15
density: 0.90
gamma: 0.01
- model: \Tess-2.0-Llama-3-70B-v0.2
parameters:
weight: 0.06
density: 0.90
gamma: 0.01
- model: \Smaug-Llama-3-70B-Instruct
parameters:
weight: 0.04
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Synthia-v3.5
parameters:
weight: 0.05
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Instruct-Gradient-524k
parameters:
weight: 0.03
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-A-LLama-70b
parameters:
weight: 0.14
density: 0.90
gamma: 0.01
merge_method: breadcrumbs_ties
base_model: I:\Llama-3-70B-Instruct-Gradient-262k
dtype: float16
``` |
kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2 | kim512 | "2024-06-17T05:35:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama 3",
"70b",
"arimas",
"story",
"roleplay",
"rp",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-06-16T15:01:56Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- 70b
- arimas
- story
- roleplay
- rp
---
# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1.6](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1.6)
[3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2)
[3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2)
[4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2)
[4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2)
[6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2)
[8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2)
Created using the defaults from exllamav2 1.4.0 convert.py:
- 3.0bpw to 6.0bpw: head bits = 6
- 8.0bpw: head bits = 8
- length = 8192
- dataset rows = 200
- measurement rows = 32
- measurement length = 8192
# model
Llama-3-70b-Arimas-story-RP-V1.6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
I greatly expanded the number of models used in this merge and experimented a lot with different ideas.
This version feels much more convincing than V1.5; hopefully the long context window will also remain strong after quantization.
Because of the many merges, I switched back from bfloat16 to float16.
I tried breadcrumbs without TIES, but that went very poorly.
### Merge Method
This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
### Models Merged
The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Meta-LLama-3-Cat-Smaug-LLama-70b
* \Meta-LLama-3-Cat-A-LLama-70b
* \Llama-3-70B-Synthia-v3.5
* \Llama-3-70B-Instruct-Gradient-524k
* \Llama-3-70B-Instruct-Gradient-262k
* \Tess-2.0-Llama-3-70B-v0.2
* \Llama-3-Lumimaid-70B-v0.1-alt
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: \Llama-3-70B-Instruct-Gradient-262k
parameters:
weight: 0.25
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-Smaug-LLama-70b
parameters:
weight: 0.28
density: 0.90
gamma: 0.01
- model: \Llama-3-Lumimaid-70B-v0.1-alt
parameters:
weight: 0.15
density: 0.90
gamma: 0.01
- model: \Tess-2.0-Llama-3-70B-v0.2
parameters:
weight: 0.06
density: 0.90
gamma: 0.01
- model: \Smaug-Llama-3-70B-Instruct
parameters:
weight: 0.04
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Synthia-v3.5
parameters:
weight: 0.05
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Instruct-Gradient-524k
parameters:
weight: 0.03
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-A-LLama-70b
parameters:
weight: 0.14
density: 0.90
gamma: 0.01
merge_method: breadcrumbs_ties
base_model: I:\Llama-3-70B-Instruct-Gradient-262k
dtype: float16
``` |
kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2 | kim512 | "2024-06-17T04:08:30Z" | 0 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"llama 3",
"70b",
"arimas",
"story",
"roleplay",
"rp",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:02:02Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- 70b
- arimas
- story
- roleplay
- rp
---
# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1.6](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1.6)
[3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2)
[3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2)
[4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2)
[4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2)
[6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2)
[8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2)
Created using the defaults from exllamav2 1.4.0 convert.py:
- 3.0bpw to 6.0bpw: head bits = 6
- 8.0bpw: head bits = 8
- length = 8192
- dataset rows = 200
- measurement rows = 32
- measurement length = 8192
# model
Llama-3-70b-Arimas-story-RP-V1.6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
I greatly expanded the number of models used in this merge and experimented a lot with different ideas.
This version feels much more convincing than V1.5; hopefully the long context window will also remain strong after quantization.
Because of the many merges, I switched back from bfloat16 to float16.
I tried breadcrumbs without TIES, but that went very poorly.
### Merge Method
This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
### Models Merged
The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Meta-LLama-3-Cat-Smaug-LLama-70b
* \Meta-LLama-3-Cat-A-LLama-70b
* \Llama-3-70B-Synthia-v3.5
* \Llama-3-70B-Instruct-Gradient-524k
* \Llama-3-70B-Instruct-Gradient-262k
* \Tess-2.0-Llama-3-70B-v0.2
* \Llama-3-Lumimaid-70B-v0.1-alt
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: \Llama-3-70B-Instruct-Gradient-262k
parameters:
weight: 0.25
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-Smaug-LLama-70b
parameters:
weight: 0.28
density: 0.90
gamma: 0.01
- model: \Llama-3-Lumimaid-70B-v0.1-alt
parameters:
weight: 0.15
density: 0.90
gamma: 0.01
- model: \Tess-2.0-Llama-3-70B-v0.2
parameters:
weight: 0.06
density: 0.90
gamma: 0.01
- model: \Smaug-Llama-3-70B-Instruct
parameters:
weight: 0.04
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Synthia-v3.5
parameters:
weight: 0.05
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Instruct-Gradient-524k
parameters:
weight: 0.03
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-A-LLama-70b
parameters:
weight: 0.14
density: 0.90
gamma: 0.01
merge_method: breadcrumbs_ties
base_model: I:\Llama-3-70B-Instruct-Gradient-262k
dtype: float16
``` |
kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2 | kim512 | "2024-06-17T04:08:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama 3",
"70b",
"arimas",
"story",
"roleplay",
"rp",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-16T15:02:11Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- 70b
- arimas
- story
- roleplay
- rp
---
# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1.6](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1.6)
[3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.0bpw-h6-exl2)
[3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-3.5bpw-h6-exl2)
[4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.0bpw-h6-exl2)
[4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-4.5bpw-h6-exl2)
[6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-6.0bpw-h6-exl2)
[8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1.6-8.0bpw-h8-exl2)
Created using the defaults from exllamav2 1.4.0 convert.py:
- 3.0bpw to 6.0bpw: head bits = 6
- 8.0bpw: head bits = 8
- length = 8192
- dataset rows = 200
- measurement rows = 32
- measurement length = 8192
# model
Llama-3-70b-Arimas-story-RP-V1.6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
I greatly expanded the number of models used in this merge and experimented a lot with different ideas.
This version feels much more convincing than V1.5; hopefully the long context window will also remain strong after quantization.
Because of the many merges, I switched back from bfloat16 to float16.
I tried breadcrumbs without TIES, but that went very poorly.
### Merge Method
This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
### Models Merged
The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Meta-LLama-3-Cat-Smaug-LLama-70b
* \Meta-LLama-3-Cat-A-LLama-70b
* \Llama-3-70B-Synthia-v3.5
* \Llama-3-70B-Instruct-Gradient-524k
* \Llama-3-70B-Instruct-Gradient-262k
* \Tess-2.0-Llama-3-70B-v0.2
* \Llama-3-Lumimaid-70B-v0.1-alt
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: \Llama-3-70B-Instruct-Gradient-262k
parameters:
weight: 0.25
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-Smaug-LLama-70b
parameters:
weight: 0.28
density: 0.90
gamma: 0.01
- model: \Llama-3-Lumimaid-70B-v0.1-alt
parameters:
weight: 0.15
density: 0.90
gamma: 0.01
- model: \Tess-2.0-Llama-3-70B-v0.2
parameters:
weight: 0.06
density: 0.90
gamma: 0.01
- model: \Smaug-Llama-3-70B-Instruct
parameters:
weight: 0.04
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Synthia-v3.5
parameters:
weight: 0.05
density: 0.90
gamma: 0.01
- model: \Llama-3-70B-Instruct-Gradient-524k
parameters:
weight: 0.03
density: 0.90
gamma: 0.01
- model: \Meta-LLama-3-Cat-A-LLama-70b
parameters:
weight: 0.14
density: 0.90
gamma: 0.01
merge_method: breadcrumbs_ties
base_model: I:\Llama-3-70B-Instruct-Gradient-262k
dtype: float16
``` |
moschouChry/chronos-t5-finetuned_tiny_1-Patient0-fine-tuned_20240616_180200 | moschouChry | "2024-06-16T15:04:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:04:04Z" | Entry not found |
HareRamaCh/results | HareRamaCh | "2024-06-16T15:04:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:04:18Z" | Entry not found |
Ecommarocchino/Jaw | Ecommarocchino | "2024-06-16T15:05:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:05:33Z" | Entry not found |
nathanhunt/w2v-bert-2.0-mongolian-colab-CV16.0 | nathanhunt | "2024-06-16T15:08:14Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:08:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dhahlan2000/Simple_Translation-model-for-GPT-v16 | Dhahlan2000 | "2024-06-16T15:51:47Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T15:10:52Z" | Entry not found |
Yuki20/llama3_8b_instruct_aci_5e | Yuki20 | "2024-06-16T15:11:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:10:54Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** Yuki20
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aaalby/hyein | aaalby | "2024-06-16T15:18:48Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T15:18:09Z" | ---
license: openrail
---
|
xjw1001001/lora_vit_code | xjw1001001 | "2024-06-16T15:23:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:20:26Z" | Entry not found |
ChengSyuen/llama-3-8b-chat-finetuned | ChengSyuen | "2024-06-17T16:17:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:23:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aitorrent/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF-torrent | aitorrent | "2024-06-16T15:36:42Z" | 0 | 0 | transformers | [
"transformers",
"torrent",
"en",
"dataset:cognitivecomputations/Dolphin-2.9.2",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:internlm/Agent-FLAN",
"dataset:cognitivecomputations/SystemChat-2.0",
"base_model:cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:23:29Z" | ---
base_model: cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated
datasets:
- cognitivecomputations/Dolphin-2.9.2
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- internlm/Agent-FLAN
- cognitivecomputations/SystemChat-2.0
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- torrent
---
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/U7U2ZEFWU)
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q2_K.gguf) | Q2_K | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_XS.gguf) | IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_S.gguf) | Q3_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_M.gguf) | IQ3_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_M.gguf) | Q3_K_M | 6.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_L.gguf) | Q3_K_L | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ4_XS.gguf) | IQ4_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K_S.gguf) | Q4_K_S | 8.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K_M.gguf) | Q4_K_M | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q5_K_S.gguf) | Q5_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q5_K_M.gguf) | Q5_K_M | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q6_K.gguf) | Q6_K | 11.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf) | Q8_0 | 14.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 |
MG31/v8_11_safetensors | MG31 | "2024-06-16T15:36:26Z" | 0 | 0 | null | [
"object-detection",
"region:us"
] | object-detection | "2024-06-16T15:24:53Z" | ---
pipeline_tag: object-detection
--- |
CLASS-MATE/Llama3-8b-dataset2 | CLASS-MATE | "2024-06-16T15:26:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:25:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ashu2000/Llama-2-7b-chat-finetune | ashu2000 | "2024-06-16T16:07:39Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T15:26:22Z" | ---
license: apache-2.0
---
|
nope13456/egro | nope13456 | "2024-06-16T15:28:19Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-16T15:27:45Z" | ---
license: mit
---
|
africa3939/sd3-medium | africa3939 | "2024-06-16T15:29:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:29:08Z" | Entry not found |
subhasishtech88/lama_fine_tune_lora_model_1 | subhasishtech88 | "2024-06-16T15:33:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:33:02Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** subhasishtech88
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Xiaolihai/BioGPT-Large_MeDistill_28_BioGPT-Large_ep10 | Xiaolihai | "2024-06-16T15:36:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:36:14Z" | Entry not found |
V3N0M/Qwen-Jenna-v01 | V3N0M | "2024-06-16T15:39:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/qwen2-0.5b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:39:02Z" | ---
base_model: unsloth/qwen2-0.5b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** V3N0M
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-0.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SogoChang/distilbert-base-uncased-finetuned-imdb | SogoChang | "2024-06-16T15:39:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:39:15Z" | Entry not found |
CarelS/gpt2-wikitext2 | CarelS | "2024-06-16T15:42:34Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T15:39:29Z" | Entry not found |
ehristoforu/dpo-spo-loras | ehristoforu | "2024-06-16T15:55:48Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-16T15:41:11Z" | ---
license: creativeml-openrail-m
---
|
SilvioLima/absa_treinamento_0 | SilvioLima | "2024-06-17T19:18:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T15:46:04Z" |
# Model Card for ABSA_AOTE_distilGPT2
## General Information
- **Name:** Model for Aspect-Opinion Triplet Extraction (AOTE) based on distilGPT2
- **Type:** decoder-only
- **License:** [Model license]
- **Base model:** distilGPT2
## Summary
distilGPT2 model fine-tuned for the ABSA/AOTE task on the SemEval + Amazon datasets.
PyTorch was used for training.
Parameters:
| Parameter | Value | Description |
| ------------- | ------------- | ------------- |
| model | distilGPT2 | Name of the base model |
| train_size | None | Number of training samples |
| val_size | None | Number of validation samples |
| test_size | None | Number of test samples |
| max_input_length | 128 | Maximum number of input tokens |
| max_output_length | 128 | Maximum number of output tokens |
| batch_size | 16 | Number of samples per batch |
| n_epochs | 10 | Maximum number of training epochs |
| lr | 1.00E-03 | Learning rate |
| use_weights | FALSE | Whether to use custom weights for each polarity |
| use_paraphrase | FALSE | Whether to produce the output in paraphrase format |
| use_prompt | FALSE | Whether to include an instruction along with the review in the input |
| one_shot | FALSE | Whether to provide an example along with the prompt |
| early_stop | 3 | Early-stopping patience (training stops if the validation loss does not decrease for three epochs) |
## Intended Use
The model was fine-tuned on the input and output format described below, so when loading it for inference the data should follow the same format.
Input: The pizza was good, but the waiter was lazy.
Output: [('pizza', 'good', 'POS'), ('waiter', 'lazy', 'NEG')]
## Languages
English
## Training Data
The data is a combination of the ASTE datasets from [1] and DM-ASTE [2], which follow the same data format described above.
[1] XU, Lu et al. Position-aware tagging for aspect sentiment triplet extraction. arXiv preprint arXiv:2010.02609, 2020.
[2] XU, Ting et al. Measuring Your ASTE Models in The Wild: A Diversified Multi-domain Dataset For Aspect Sentiment Triplet Extraction. arXiv preprint arXiv:2305.17448, 2023.
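Because the model emits its triplets as a Python-literal list of tuples (as in the output example above), the decoded text can be parsed safely with `ast.literal_eval`. A minimal sketch (the raw string is assumed to be the model's decoded output):

```python
import ast

def parse_triplets(raw_output: str):
    """Parse the model's decoded output into (aspect, opinion, polarity) triplets."""
    triplets = ast.literal_eval(raw_output.strip())
    # Keep only well-formed 3-tuples; anything else is discarded.
    return [t for t in triplets if isinstance(t, tuple) and len(t) == 3]

raw = "[('pizza', 'good', 'POS'), ('waiter', 'lazy', 'NEG')]"
print(parse_triplets(raw))  # -> [('pizza', 'good', 'POS'), ('waiter', 'lazy', 'NEG')]
```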
|
HareRamaCh/model-finetuned | HareRamaCh | "2024-06-16T15:48:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:48:40Z" | Entry not found |
JamesKim/m2m100-ft3 | JamesKim | "2024-06-16T15:50:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-16T15:49:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pablovela5620/dsine_kappa | pablovela5620 | "2024-06-16T17:10:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:54:08Z" | Entry not found |
AiHubber/CatRave990 | AiHubber | "2024-06-16T15:55:31Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T15:55:03Z" | ---
license: openrail
---
|
sgarcianicito/ubi | sgarcianicito | "2024-06-16T15:56:51Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T15:56:51Z" | ---
license: openrail
---
|
BarBossHk/egg | BarBossHk | "2024-06-16T15:59:15Z" | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | "2024-06-16T15:59:15Z" | ---
license: afl-3.0
---
|
ckazotronsyka/worke | ckazotronsyka | "2024-06-16T15:59:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T15:59:37Z" | Entry not found |
Mohammed-majeed/llama-3-8b-bnb-4bit-Unsloth-chunk-7-0.5-1 | Mohammed-majeed | "2024-06-16T16:05:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T16:03:46Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Mohammed-majeed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
audo/Lumina-T2Music | audo | "2024-06-16T16:10:00Z" | 0 | 0 | transformers | [
"transformers",
"text-to-audio",
"music",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-06-16T16:05:32Z" | ---
license: apache-2.0
tags:
- text-to-audio
- music
library_name: transformers
---
# Lumina Text-to-Music
We will release our implementation and pretrained models in this repository soon.
- Generation Model: Flag-DiT
- Text Encoder: [FLAN-T5-Large](https://huggingface.co/google/flan-t5-large)
- VAE: Make-An-Audio 2, fine-tuned from [Make-An-Audio](https://github.com/Text-to-Audio/Make-An-Audio)
- Decoder: [Vocoder](https://github.com/NVIDIA/BigVGAN)
## 📰 News
- [2024-06-07] 🎉🎉🎉 We release the initial version of `Lumina-T2Music` for text-to-music generation.
## Installation
Before installation, ensure that you have a working ``nvcc``
```bash
# The command should work and show the same version number as in our case. (12.1 in our case).
nvcc --version
```
On some outdated distros (e.g., CentOS 7), you may also want to check that a late enough version of
``gcc`` is available
```bash
# The command should work and show a version of at least 6.0.
# If not, consult distro-specific tutorials to obtain a newer version or build manually.
gcc --version
```
Downloading Lumina-T2X repo from github:
```bash
git clone https://github.com/Alpha-VLLM/Lumina-T2X
```
### 1. Create a conda environment and install PyTorch
Note: You may want to adjust the CUDA version [according to your driver version](https://docs.nvidia.com/deploy/cuda-compatibility/#default-to-minor-version).
```bash
conda create -n Lumina_T2X -y
conda activate Lumina_T2X
conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
```
### 2. Install dependencies
>[!Warning]
> The environment dependencies for Lumina-T2Music are different from those for Lumina-T2I. Please install the appropriate environment.
Installing `Lumina-T2Music` dependencies:
```bash
cd .. # If you are in the `lumina_music` directory, execute this line.
pip install -e ".[music]"
```
or you can use `requirements.txt` to install the environment.
```bash
cd lumina_music # If you are not in the `lumina_music` folder, run this line.
pip install -r requirements.txt
```
### 3. Install ``flash-attn``
```bash
pip install flash-attn --no-build-isolation
```
### 4. Install [nvidia apex](https://github.com/nvidia/apex) (optional)
>[!Warning]
> While Apex can improve efficiency, it is *not* a must to make Lumina-T2X work.
>
> Note that Lumina-T2X works smoothly with either:
> + Apex not installed at all; OR
> + Apex successfully installed with CUDA and C++ extensions.
>
> However, it will fail when:
> + A Python-only build of Apex is installed.
>
> If the error `No module named 'fused_layer_norm_cuda'` appears, it typically means you are using a Python-only build of Apex. To resolve this, please run `pip uninstall apex`, and Lumina-T2X should then function correctly.
You can clone the repo and install following the official guidelines (note that we expect a full
build, i.e., with CUDA and C++ extensions)
```bash
pip install ninja
git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1) which supports multiple `--config-settings` with the same key...
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
## Inference
### Preparation
Prepare the pretrained checkpoints.
⭐⭐ (Recommended) You can use `huggingface-cli` to download our model:
```bash
huggingface-cli download --resume-download Alpha-VLLM/Lumina-T2Music --local-dir /path/to/ckpt
```
or use `git` to clone the model you want to use:
```bash
git clone https://huggingface.co/Alpha-VLLM/Lumina-T2Music
```
### Web Demo
To host a local gradio demo for interactive inference, run the following command:
1. Update the `AutoencoderKL` checkpoint path
You should edit `configs/lumina-text2music.yaml` to set the `AutoencoderKL` checkpoint path. Please replace `/path/to/ckpt` with the path where your checkpoints are located (`<real_path>`).
```diff
...
depth: 16
max_len: 1000
first_stage_config:
target: models.autoencoder1d.AutoencoderKL
params:
embed_dim: 20
monitor: val/rec_loss
- ckpt_path: /path/to/ckpt/maa2/maa2.ckpt
+ ckpt_path: <real_path>/maa2/maa2.ckpt
ddconfig:
double_z: true
in_channels: 80
out_ch: 80
...
```
2. Set the `Lumina-T2Music` and `Vocoder` checkpoint paths and run the demo
Please replace `/path/to/ckpt` with the actual downloaded path.
```bash
# `/path/to/ckpt` should be a directory containing `music_generation`, `maa2`, and `bigvnat`.
# default
python -u demo_music.py \
--ckpt "/path/to/ckpt/music_generation" \
--vocoder_ckpt "/path/to/ckpt/bigvnat" \
--config_path "configs/lumina-text2music.yaml" \
--sample_rate 16000
```
## Disclaimer
Any organization or individual is prohibited from using any technology mentioned in this paper to generate someone's speech without his/her consent, including but not limited to government leaders, political figures, and celebrities. If you do not comply with this item, you could be in violation of copyright laws. |
EmbeddedLLM/llama-2-13b-chat-int4-onnx-directml | EmbeddedLLM | "2024-06-17T15:33:47Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"ONNX",
"DirectML",
"DML",
"conversational",
"ONNXRuntime",
"custom_code",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T16:09:53Z" | ---
license: llama2
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- llama
- llama-2
- ONNX
- DirectML
- DML
- conversational
- ONNXRuntime
- custom_code
---
# Llama-2-13b-chat ONNX models for DirectML
This repository hosts the optimized versions of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) to accelerate inference with ONNX Runtime for DirectML.
## Usage on Windows (Intel / AMD / Nvidia / Qualcomm)
```powershell
conda create -n onnx python=3.10
conda activate onnx
winget install -e --id GitHub.GitLFS
pip install "huggingface-hub[cli]"
huggingface-cli download EmbeddedLLM/llama-2-13b-chat-int4-onnx-directml --local-dir .\llama-2-13b-chat
pip install numpy==1.26.4
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py" -OutFile "phi3-qa.py"
pip install onnxruntime-directml
pip install --pre onnxruntime-genai-directml
conda install conda-forge::vs2015_runtime
python phi3-qa.py -m .\llama-2-13b-chat
```
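Llama-2-chat models expect their inputs in the `[INST]` chat format. A minimal helper for building a single-turn prompt is sketched below; it illustrates the standard Llama-2 format and is independent of the ONNX runtime pipeline above:

```python
from typing import Optional

def build_llama2_prompt(user_message: str, system_prompt: Optional[str] = None) -> str:
    """Build a single-turn prompt in the Llama-2-chat [INST] format."""
    if system_prompt:
        return (
            f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )
    return f"<s>[INST] {user_message} [/INST]"

print(build_llama2_prompt("What is DirectML?", "You are a helpful assistant."))
```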
## What is DirectML
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm. |
pookie3000/trump_lora | pookie3000 | "2024-06-16T16:13:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T16:11:17Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
richardkelly/Qwen-Qwen1.5-7B-1718554398 | richardkelly | "2024-06-16T16:13:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"region:us"
] | null | "2024-06-16T16:13:18Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Darkknight12/Pytorch_Mnist_Model | Darkknight12 | "2024-06-16T16:17:09Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-16T16:14:57Z" | ---
license: mit
---
|
marcossoaresgg/zhline | marcossoaresgg | "2024-06-16T16:16:16Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T16:15:33Z" | ---
license: openrail
---
|
vivym/face-parsing-bisenet | vivym | "2024-06-16T16:18:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:15:53Z" | # Face Parsing BiSeNet
[https://github.com/zllrunning/face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch)
|
gwong001/my_awesome_model | gwong001 | "2024-06-16T16:16:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:16:16Z" | Entry not found |
whizzzzkid/G_80000 | whizzzzkid | "2024-06-16T16:17:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:17:00Z" | Entry not found |
Wenrui/ML_TTS_Dataset | Wenrui | "2024-06-30T19:24:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:20:35Z" | # ML_TTS_Dataset
## Pipeline
1. File renaming
* ML_TTS_Dataset/examples/bash/rename/run_single_split.sh
2. Convert to 16 kHz
* bash ML_TTS_Dataset/examples/bash/resample/run_single_dir.sh
Specify the output audio format and sample rate.
3. Noise suppression check: set a threshold to drop clips with background noise
* bash ML_TTS_Dataset/examples/bash/noise_suppression
Step 2 can be skipped by going straight to step 3; no separate sample-rate conversion is needed.
4. Speaker diarization: discard audio containing multiple speakers.
* bash ML_TTS_Dataset/examples/bash/speaker_diarization/run_audio_root.sh
5. ASR (with duration)
* bash ML_TTS_Dataset/examples/bash/asr/run_single_split.sh
6. Clip cutting (cut into 3 s to 10 s segments according to duration)
The previous step produces ASR timestamps, and the audio can be cut along those timestamps. This step uses a lot of network bandwidth (all the data is stored on a network drive), so handing it off to be cut locally is recommended.
7. See ML_TTS_Dataset/examples/demo.txt for the final format. |
Xiaolihai/BioMistral-7B_MeDistill_28_BioGPT-Large_ep10 | Xiaolihai | "2024-06-16T16:22:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:22:18Z" | Entry not found |
alru28/trained-sd3-lora | alru28 | "2024-06-16T16:23:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:23:56Z" | Entry not found |
ElectricIceBird/ppo-Huggy | ElectricIceBird | "2024-06-16T16:25:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:25:10Z" | Entry not found |
shalexxxy/my_t5_small_test | shalexxxy | "2024-06-18T11:23:10Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T16:26:07Z" | Entry not found |
KYAGABA/wav2vec2-large-xls-r-300m-rw-1hr-v1 | KYAGABA | "2024-06-17T15:26:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_5_1",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-16T16:27:08Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_5_1
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-rw-1hr-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_5_1
type: common_voice_5_1
config: rw
split: test
args: rw
metrics:
- name: Wer
type: wer
value: 0.9068557919621749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-rw-1hr-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_5_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1842
- Wer: 0.9069
- Cer: 0.2771
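The WER reported above is the standard word-level edit distance divided by the reference length; a minimal sketch of the metric (not the exact evaluation script used for this model) is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] holds the edit distance between a ref prefix and hyp[:j]
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(dp[j] + 1,        # deletion
                      dp[j - 1] + 1,    # insertion
                      prev + (r != h))  # substitution / match
            prev, dp[j] = dp[j], cur
    return dp[-1] / len(ref)
```

Libraries such as `jiwer` or `evaluate` are normally used for this in practice.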
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 8.8463 | 5.2632 | 100 | 4.4056 | 1.0 | 1.0 |
| 3.2194 | 10.5263 | 200 | 3.1877 | 1.0 | 1.0 |
| 2.9338 | 15.7895 | 300 | 2.9724 | 1.0 | 1.0 |
| 2.7275 | 21.0526 | 400 | 2.5030 | 1.0 | 0.7623 |
| 1.1143 | 26.3158 | 500 | 1.2838 | 0.9378 | 0.3333 |
| 0.4144 | 31.5789 | 600 | 1.2007 | 0.9099 | 0.2962 |
| 0.2425 | 36.8421 | 700 | 1.1657 | 0.9040 | 0.2815 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Fearless-15/Apex | Fearless-15 | "2024-06-16T16:29:25Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-06-16T16:29:25Z" | ---
license: other
license_name: dev
license_link: LICENSE
---
|
richardkelly/google-gemma-2b-1718555510 | richardkelly | "2024-06-16T16:32:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | "2024-06-16T16:31:50Z" | ---
library_name: peft
base_model: google/gemma-2b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
utkukose/deneme | utkukose | "2024-06-16T16:35:14Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-16T16:35:14Z" | ---
license: mit
---
|
Bucino/llnn | Bucino | "2024-06-16T16:40:01Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T16:37:02Z" | ---
license: openrail
---
|
callmesan/audio-abuse-feature | callmesan | "2024-06-16T16:41:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:HariprasathSB/indic-whisper-vulnerable",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-16T16:41:23Z" | ---
base_model: HariprasathSB/indic-whisper-vulnerable
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: audio-abuse-feature
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio-abuse-feature
This model is a fine-tuned version of [HariprasathSB/indic-whisper-vulnerable](https://huggingface.co/HariprasathSB/indic-whisper-vulnerable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4489
- Accuracy: 0.8814
- Macro Precision: 0.8557
- Macro Recall: 0.8472
- Macro F1-score: 0.8513
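The macro-averaged scores above are unweighted means of the per-class metrics; a minimal sketch of how they are computed (not the exact evaluation script used here) is:

```python
def macro_scores(y_true, y_pred):
    """Macro-averaged precision, recall and F1 over the label set."""
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

In practice scikit-learn's `precision_recall_fscore_support(..., average="macro")` computes the same quantities.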
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro Precision | Macro Recall | Macro F1-score |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------------:|
| 0.4633 | 0.4367 | 50 | 0.3753 | 0.8327 | 0.8321 | 0.8314 | 0.8317 |
| 0.345 | 0.8734 | 100 | 0.4170 | 0.8241 | 0.8612 | 0.8126 | 0.8150 |
| 0.2592 | 1.3100 | 150 | 0.3357 | 0.8512 | 0.8506 | 0.8502 | 0.8504 |
| 0.2097 | 1.7467 | 200 | 0.3142 | 0.8758 | 0.8757 | 0.8744 | 0.8749 |
| 0.1545 | 2.1834 | 250 | 0.3551 | 0.8721 | 0.8713 | 0.8718 | 0.8715 |
| 0.0829 | 2.6201 | 300 | 0.3916 | 0.8795 | 0.8797 | 0.8778 | 0.8786 |
| 0.0944 | 3.0568 | 350 | 0.4137 | 0.8721 | 0.8714 | 0.8730 | 0.8718 |
| 0.0416 | 3.4934 | 400 | 0.5350 | 0.8659 | 0.8677 | 0.8631 | 0.8646 |
| 0.0469 | 3.9301 | 450 | 0.5129 | 0.8733 | 0.8727 | 0.8726 | 0.8727 |
| 0.0247 | 4.3668 | 500 | 0.5543 | 0.8708 | 0.8713 | 0.8689 | 0.8698 |
| 0.0208 | 4.8035 | 550 | 0.5611 | 0.8696 | 0.8691 | 0.8688 | 0.8689 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
marcosprun/rociomedina | marcosprun | "2024-07-01T01:28:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:42:07Z" | Entry not found |
hishamcse/Reinforce-Pixelcopter-PLE-v0 | hishamcse | "2024-06-16T18:25:42Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T16:42:32Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 97.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
moschouChry/chronos-t5-finetuned_tiny_1-Patient0-fine-tuned_20240616_194355 | moschouChry | "2024-06-16T16:47:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T16:45:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
moschouChry/chronos-t5-finetuned_tiny_1-Patient0-fine-tuned_20240616_194735 | moschouChry | "2024-06-16T16:49:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:49:42Z" | Entry not found |
whizzzzkid/ft_G_0050000 | whizzzzkid | "2024-06-18T04:32:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:50:35Z" | Entry not found |
whizzzzkid/ft_G_030000 | whizzzzkid | "2024-06-16T16:52:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:52:01Z" | Entry not found |
Xiaolihai/BioMistral-7B_MeDistill_28_BioMistral-7B_ep10 | Xiaolihai | "2024-06-16T16:52:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:52:13Z" | Entry not found |
Richdog89/Dog | Richdog89 | "2024-06-16T16:58:00Z" | 0 | 0 | null | [
"ae",
"dataset:HuggingFaceFW/fineweb",
"license:artistic-2.0",
"region:us"
] | null | "2024-06-16T16:55:52Z" | ---
license: artistic-2.0
datasets:
- HuggingFaceFW/fineweb
language:
- ae
--- |
microzen/Qwen2-1.5b-lora | microzen | "2024-06-16T17:03:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T16:56:26Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ksridhar/atari_2B_atari_carnival_1111 | ksridhar | "2024-06-16T16:59:36Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T16:57:38Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_carnival
type: atari_carnival
metrics:
- type: mean_reward
value: 718.00 +/- 546.29
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **atari_carnival** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
|
scholl99/tinyllama_humanMOD_qlora_v2 | scholl99 | "2024-06-16T16:58:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T16:58:16Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** scholl99
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nqv2291/bloom_560m-sft-open_ner_en | nqv2291 | "2024-06-16T16:58:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T16:58:22Z" | Entry not found |
ksridhar/atari_2B_atari_pooyan_1111 | ksridhar | "2024-06-16T17:01:41Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T17:00:24Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_pooyan
type: atari_pooyan
metrics:
- type: mean_reward
value: 333.50 +/- 174.87
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_pooyan** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ |
ksridhar/atari_2B_atari_airraid_1111 | ksridhar | "2024-06-16T17:03:43Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T17:02:54Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_airraid
type: atari_airraid
metrics:
- type: mean_reward
value: 465.00 +/- 182.76
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_airraid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
|
danielgi97/stable-diffusion-2-1 | danielgi97 | "2024-06-16T17:03:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:03:12Z" | Entry not found |
ksridhar/atari_2B_atari_journeyescape_1111 | ksridhar | "2024-06-16T17:06:04Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T17:04:48Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_journeyescape
type: atari_journeyescape
metrics:
- type: mean_reward
value: -21220.00 +/- 7108.14
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_journeyescape** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
|
kmate97/HeikoGrauel2_TITAN | kmate97 | "2024-06-16T17:05:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:05:23Z" | Entry not found |
gguille/lista | gguille | "2024-06-16T17:11:38Z" | 0 | 0 | null | [
"license:gpl-2.0",
"region:us"
] | null | "2024-06-16T17:11:38Z" | ---
license: gpl-2.0
---
|
wolffenbuetell/MERGERPLUSLORA17 | wolffenbuetell | "2024-06-16T17:15:33Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T17:11:50Z" | ---
license: openrail
---
|
ckpt/Lumina-Next-SFT | ckpt | "2024-06-16T17:18:40Z" | 0 | 0 | null | [
"text-to-image",
"safetensors",
"arxiv:2405.05945",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-06-16T17:14:22Z" | ---
license: apache-2.0
tags:
- text-to-image
- safetensors
---
# Lumina-Next-SFT
`Lumina-Next-SFT` is a 2B-parameter Next-DiT model that uses [Gemma-2B](https://huggingface.co/google/gemma-2b) as its text encoder and has been enhanced through high-quality supervised fine-tuning (SFT).
Our generative model uses `Next-DiT` as the backbone, the `Gemma-2B` model as the text encoder, and a version of the `sdxl` VAE fine-tuned by Stability AI.
- Generation Model: Next-DiT
- Text Encoder: [Gemma-2B](https://huggingface.co/google/gemma-2b)
- VAE: [stabilityai/sdxl-vae](https://huggingface.co/stabilityai/sdxl-vae)
[paper](https://arxiv.org/abs/2405.05945)
## News
- [2024-06-08] We have released the `Lumina-Next-SFT` model.
- [2024-05-28] We updated the `Lumina-Next-T2I` model to support 2K Resolution image generation.
- [2024-05-16] We have converted the `.pth` weights to `.safetensors` weights. Please pull the latest code to use `demo.py` for inference.
- [2024-05-12] We release the next version of `Lumina-T2I`, called `Lumina-Next-T2I` for faster and lower memory usage image generation model.
## Model Zoo
More checkpoints of our model will be released soon~
| Resolution | Next-DiT Parameter| Text Encoder | Prediction | Download URL |
| ---------- | ----------------------- | ------------ | -----------|-------------- |
| 1024 | 2B | [Gemma-2B](https://huggingface.co/google/gemma-2b) | Rectified Flow | [hugging face](https://huggingface.co/Alpha-VLLM/Lumina-Next-SFT) |
## Installation
Before installation, ensure that you have a working ``nvcc``
```bash
# The command should work and show the same version number as in our case. (12.1 in our case).
nvcc --version
```
On some outdated distros (e.g., CentOS 7), you may also want to check that a late enough version of
``gcc`` is available
```bash
# The command should work and show a version of at least 6.0.
# If not, consult distro-specific tutorials to obtain a newer version or build manually.
gcc --version
```
Downloading Lumina-T2X repo from GitHub:
```bash
git clone https://github.com/Alpha-VLLM/Lumina-T2X
```
### 1. Create a conda environment and install PyTorch
Note: You may want to adjust the CUDA version [according to your driver version](https://docs.nvidia.com/deploy/cuda-compatibility/#default-to-minor-version).
```bash
conda create -n Lumina_T2X -y
conda activate Lumina_T2X
conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
```
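After activating the environment, a quick sanity check of the interpreter and CUDA visibility can catch setup problems early. This is an illustrative stand-alone probe, not part of the Lumina-T2X tooling (the version numbers mirror the install command above):

```python
import importlib.util
import sys

# The install command above requests Python 3.11 and PyTorch 2.1.0 with CUDA 12.1.
print("python:", sys.version.split()[0])
if importlib.util.find_spec("torch") is None:
    print("torch not installed yet")
else:
    import torch
    print("torch:", torch.__version__, "| cuda available:", torch.cuda.is_available())
```

If `cuda available` prints `False` on a GPU machine, double-check that the `pytorch-cuda` version matches your driver before continuing.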
### 2. Install dependencies
```bash
pip install diffusers fairscale accelerate tensorboard transformers gradio torchdiffeq click
```
or you can use
```bash
cd lumina_next_t2i
pip install -r requirements.txt
```
### 3. Install ``flash-attn``
```bash
pip install flash-attn --no-build-isolation
```
### 4. Install [nvidia apex](https://github.com/nvidia/apex) (optional)
> [!WARNING]
> While Apex can improve efficiency, it is *not* a must to make Lumina-T2X work.
>
> Note that Lumina-T2X works smoothly with either:
> + Apex not installed at all; OR
> + Apex successfully installed with CUDA and C++ extensions.
>
> However, it will fail when:
> + A Python-only build of Apex is installed.
>
> If the error `No module named 'fused_layer_norm_cuda'` appears, it typically means you are using a Python-only build of Apex. To resolve this, please run `pip uninstall apex`, and Lumina-T2X should then function correctly.
You can clone the repo and install following the official guidelines (note that we expect a full
build, i.e., with CUDA and C++ extensions)
```bash
pip install ninja
git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1) which supports multiple `--config-settings` with the same key...
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
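If it is unclear whether the currently installed Apex is a full build, a quick probe for the compiled extension named in the warning above can help. This is an illustrative check, not a helper shipped with Lumina-T2X:

```python
import importlib.util

def apex_has_cuda_ext() -> bool:
    """True only when Apex's compiled CUDA extension is importable (full build)."""
    return importlib.util.find_spec("fused_layer_norm_cuda") is not None

if __name__ == "__main__":
    if importlib.util.find_spec("apex") is None:
        print("apex not installed (fine: Apex is optional)")
    elif apex_has_cuda_ext():
        print("full apex build with CUDA/C++ extensions")
    else:
        print("python-only apex build: run `pip uninstall apex`")
```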
## Inference
To ensure that our generative model is ready to use right out of the box, we provide a user-friendly CLI program and a locally deployable Web Demo site.
### CLI
1. Install Lumina-Next-T2I
```bash
pip install -e .
```
2. Prepare the pre-trained model
(Recommended) You can use `huggingface-cli` to download our model:
```bash
huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-T2I --local-dir /path/to/ckpt
```
or use `git` to clone the model you want to use:
```bash
git clone https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I
```
3. Set up your personal inference configuration
Update your personal inference settings to generate different styles of images; see `config/infer/config.yaml` for the available options. Detailed config structure:
> `/path/to/ckpt` should be a directory containing `consolidated*.pth` and `model_args.pth`
```yaml
- settings:
model:
    ckpt: "/path/to/ckpt" # if ckpt is "", pass the model path with `--ckpt` when using the `lumina` CLI.
    ckpt_lm: "" # if ckpt_lm is "", pass the LLM path with `--ckpt_lm` when using the `lumina` CLI.
    token: "" # if the LLM is a gated Hugging Face repo, put your access token here; if token is "", pass it with `--token` when using the `lumina` CLI.
transport:
path_type: "Linear" # option: ["Linear", "GVP", "VP"]
prediction: "velocity" # option: ["velocity", "score", "noise"]
loss_weight: "velocity" # option: [None, "velocity", "likelihood"]
sample_eps: 0.1
train_eps: 0.2
ode:
atol: 1e-6 # Absolute tolerance
rtol: 1e-3 # Relative tolerance
reverse: false # option: true or false
likelihood: false # option: true or false
infer:
resolution: "1024x1024" # option: ["1024x1024", "512x2048", "2048x512", "(Extrapolation) 1664x1664", "(Extrapolation) 1024x2048", "(Extrapolation) 2048x1024"]
num_sampling_steps: 60 # range: 1-1000
cfg_scale: 4. # range: 1-20
solver: "euler" # option: ["euler", "dopri5", "dopri8"]
t_shift: 4 # range: 1-20 (int only)
ntk_scaling: true # option: true or false
proportional_attn: true # option: true or false
    seed: 0 # range: any number
```
- model:
- `ckpt`: lumina-next-t2i checkpoint path from [huggingface repo](https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I) containing `consolidated*.pth` and `model_args.pth`.
- `ckpt_lm`: LLM checkpoint.
- `token`: huggingface access token for accessing gated repo.
- transport:
- `path_type`: the type of transport path: 'Linear', 'GVP' (generalized variance preserving), or 'VP' (variance preserving).
- `prediction`: the prediction target for the transport dynamics.
- `loss_weight`: the weighting of different components in the loss function; can be 'velocity' for dynamic modeling, 'likelihood' for statistical consistency, or None for no weighting.
- `sample_eps`: epsilon used when sampling from the transport model.
- `train_eps`: epsilon used during training to stabilize the learning process.
- ode:
- `atol`: absolute tolerance for the ODE solver.
- `rtol`: relative tolerance for the ODE solver.
- `reverse`: whether to run the ODE solver in reverse.
- `likelihood`: Enable calculation of likelihood during the ODE solving process.
- infer:
- `resolution`: generated image resolution.
- `num_sampling_steps`: number of sampling steps for generating an image.
- `cfg_scale`: classifier-free guidance scale factor.
- `solver`: ODE solver for image generation.
- `t_shift`: time shift factor.
- `ntk_scaling`: whether to use NTK-aware RoPE scaling.
- `proportional_attn`: whether to use proportional attention.
- `seed`: random initialization seed.
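Several of these options have narrow valid ranges, so a small sanity check before launching a long generation run can save time. The helper below is only a sketch that mirrors the ranges documented above — it is not shipped with the `lumina` CLI, and the dictionary layout follows the YAML example:

```python
VALID_CHOICES = {
    "path_type": {"Linear", "GVP", "VP"},
    "prediction": {"velocity", "score", "noise"},
    "solver": {"euler", "dopri5", "dopri8"},
}

def validate_infer_cfg(cfg):
    """Return a list of problems found in an inference config dict (empty list = OK)."""
    problems = []
    transport, infer = cfg["transport"], cfg["infer"]
    for key in ("path_type", "prediction"):
        if transport[key] not in VALID_CHOICES[key]:
            problems.append("transport.%s: unknown value %r" % (key, transport[key]))
    if not 1 <= infer["num_sampling_steps"] <= 1000:
        problems.append("infer.num_sampling_steps: expected 1-1000")
    if not 1 <= infer["cfg_scale"] <= 20:
        problems.append("infer.cfg_scale: expected 1-20")
    if infer["solver"] not in VALID_CHOICES["solver"]:
        problems.append("infer.solver: unknown value %r" % infer["solver"])
    return problems

cfg = {
    "transport": {"path_type": "Linear", "prediction": "velocity"},
    "infer": {"num_sampling_steps": 60, "cfg_scale": 4.0, "solver": "euler"},
}
print(validate_infer_cfg(cfg))  # prints [] for the example config above
```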
4. Run with CLI
inference command:
```bash
lumina_next infer -c <config_path> <caption_here> <output_dir>
```
e.g. Demo command:
```bash
cd lumina_next_t2i
lumina_next infer -c "config/infer/settings.yaml" "a snowman of ..." "./outputs"
```
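To drive the CLI over a batch of captions, a thin wrapper script can assemble one command per prompt. This is a hypothetical convenience script (the caption list, config path, and output layout are placeholders, not part of the project):

```python
import subprocess

def infer_cmd(caption, out_dir, config="config/infer/settings.yaml"):
    """Build the argument list for one `lumina_next infer` invocation."""
    return ["lumina_next", "infer", "-c", config, caption, out_dir]

captions = ["a snowman of ...", "a watercolor painting of a fox"]
for i, caption in enumerate(captions):
    cmd = infer_cmd(caption, "./outputs/%03d" % i)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run generation
```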
### Web Demo
To host a local Gradio demo for interactive inference, run the following command:
```bash
# `/path/to/ckpt` should be a directory containing `consolidated*.pth` and `model_args.pth`
# default
python -u demo.py --ckpt "/path/to/ckpt"
# The demo uses bf16 precision by default; to switch to fp32:
python -u demo.py --ckpt "/path/to/ckpt" --precision fp32
# use ema model
python -u demo.py --ckpt "/path/to/ckpt" --ema
``` |
h34i7cby47t/modelf | h34i7cby47t | "2024-06-16T17:22:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:22:38Z" | Entry not found |
HourunLi/BGE3-research | HourunLi | "2024-06-16T17:23:27Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-16T17:23:27Z" | ---
license: apache-2.0
---
|