modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
AriaRahmati1/22ghesmat1part1 | AriaRahmati1 | "2024-06-13T22:07:41Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T20:41:30Z" | ---
license: openrail
---
|
elee25/taxi-v3 | elee25 | "2024-06-13T21:02:14Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-13T20:41:42Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="elee25/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
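A minimal evaluation sketch (not part of the original card): it rolls out the greedy policy from the loaded Q-table, assuming the Deep RL course convention that the loaded dict exposes a `"qtable"` array.
```python
import numpy as np

state, _ = env.reset()
total_reward, done = 0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward}")
```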
|
LarryAIDraw/lightXL | LarryAIDraw | "2024-06-13T20:45:47Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:44:43Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/131389/pony-xl-and-15-neolight-background-and-lightning?modelVersionId=393981 |
LarryAIDraw/detailed_notrigger | LarryAIDraw | "2024-06-13T20:48:40Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:45:03Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/229213/extremely-detailed-no-trigger-slidersntcaixyz?modelVersionId=383563 |
LarryAIDraw/nobrav1_SDXL | LarryAIDraw | "2024-06-13T20:48:49Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:45:27Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/137296/no-bra-sdxl?modelVersionId=151506 |
Frixi/Feid_2024 | Frixi | "2024-06-13T20:46:38Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T20:46:22Z" | ---
license: openrail
---
|
Amirsilent2001/Amirai | Amirsilent2001 | "2024-06-13T20:47:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T20:47:49Z" | Entry not found |
marianbasti/ComunicacionesBCRA | marianbasti | "2024-06-15T17:59:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T20:49:18Z" | Entry not found |
majidmvulle/learn_ml | majidmvulle | "2024-06-13T20:50:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T20:50:09Z" | Entry not found |
LarryAIDraw/d3c4yXLP | LarryAIDraw | "2024-06-13T20:54:53Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:52:29Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/454079?modelVersionId=505554 |
LarryAIDraw/is_pretty | LarryAIDraw | "2024-06-13T20:55:02Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:52:52Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/287548/is-pretty-sliders-ntcaixyz?modelVersionId=323408 |
LarryAIDraw/zPDXL2 | LarryAIDraw | "2024-06-13T20:55:12Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:53:13Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/332646/pony-pdxl-negative-embeddings?modelVersionId=509253 |
elifztunc/ChatBot | elifztunc | "2024-06-13T20:54:31Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-13T20:54:31Z" | ---
license: apache-2.0
---
|
ShiftAddLLM/Llama-2-70b-wbits3-acc | ShiftAddLLM | "2024-06-13T21:18:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-13T20:55:22Z" | Entry not found |
LarryAIDraw/clear | LarryAIDraw | "2024-06-13T20:58:29Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:56:21Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/279601/clear-deblur-sliders-ntcaixyz?modelVersionId=314907 |
LarryAIDraw/OverallDetailXL | LarryAIDraw | "2024-06-13T20:58:37Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:56:40Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/98259/detail?modelVersionId=539032 |
LarryAIDraw/y_d | LarryAIDraw | "2024-06-13T20:58:46Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T20:57:00Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/413038/yd-orange-maru-lora-xl?modelVersionId=460322 |
dadu/universal-translator | dadu | "2024-06-27T21:26:03Z" | 0 | 0 | null | [
"safetensors",
"license:mit",
"region:us"
] | null | "2024-06-13T20:57:13Z" | ---
license: mit
---
|
Yuki20/alpaca3_8b_aci1 | Yuki20 | "2024-06-13T20:58:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T20:58:40Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Yuki20
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
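For context, a hedged inference sketch (not from the original card) using Unsloth's loader; the sequence length and prompt are placeholders.
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Yuki20/alpaca3_8b_aci1",
    max_seq_length=2048,  # placeholder; choose per task
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("Write a one-line summary.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```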
|
onizukal/Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold1 | onizukal | "2024-06-13T22:43:13Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-13T20:59:47Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8316238448258353
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7576
- Accuracy: 0.8316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5331 | 1.0 | 2469 | 0.5098 | 0.7932 |
| 0.367 | 2.0 | 4938 | 0.4616 | 0.8076 |
| 0.3223 | 3.0 | 7407 | 0.4300 | 0.8335 |
| 0.2322 | 4.0 | 9876 | 0.4848 | 0.8307 |
| 0.0915 | 5.0 | 12345 | 0.7576 | 0.8316 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
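The card does not document inference; a hedged usage sketch with the standard transformers image-classification pipeline (the image path is a placeholder) would look like:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold1",
)
print(classifier("example.jpg"))  # placeholder path to an input image
```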
|
LarryAIDraw/rangiku-pdxl-nvwls-v1 | LarryAIDraw | "2024-06-13T21:04:22Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T21:00:46Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/489316/rangiku-matsumoto-bleach-sdxl-lora-pony-diffusion?modelVersionId=544100 |
LarryAIDraw/dishXL_JS2_lokr_V3236 | LarryAIDraw | "2024-06-13T21:04:32Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T21:01:21Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/403375/sdxl-artist-style-dishwasher1910?modelVersionId=449769 |
davis-etsy/digital_physical_classifier_test_1 | davis-etsy | "2024-06-13T21:01:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:01:38Z" | Entry not found |
LarryAIDraw/VividRealismColorEnhancer | LarryAIDraw | "2024-06-13T21:05:09Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-13T21:02:11Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/411536/vivid-realism-color-enhancer-ponyxl?modelVersionId=458702 |
DaveRave69/trained-sd3-lora | DaveRave69 | "2024-06-13T21:06:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:06:59Z" | Entry not found |
LC-CHANG/LLAMA3_8b_loraModel | LC-CHANG | "2024-06-13T21:08:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:07:59Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rafaeloc15/llama3-v5_q4km | rafaeloc15 | "2024-06-13T21:08:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:08:06Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** rafaeloc15
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
silent666/Qwen-Qwen1.5-7B-1718312911 | silent666 | "2024-06-13T21:08:32Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"region:us"
] | null | "2024-06-13T21:08:31Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
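Since the card leaves this section blank, here is a hedged loading sketch for a PEFT adapter on the stated base model (it is an assumption that this repo holds a LoRA-style adapter):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B")
model = PeftModel.from_pretrained(base, "silent666/Qwen-Qwen1.5-7B-1718312911")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
```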
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
silent666/google-gemma-2b-1718312941 | silent666 | "2024-06-13T21:09:08Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | "2024-06-13T21:09:01Z" | ---
library_name: peft
base_model: google/gemma-2b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
kacxx/mfcg-PDXL-V10 | kacxx | "2024-06-30T17:36:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:09:12Z" | Entry not found |
saifhmb/social-network-ads-logit-model | saifhmb | "2024-06-20T22:22:29Z" | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"dataset:saifhmb/social-network-ads",
"region:us"
] | tabular-classification | "2024-06-13T21:10:32Z" | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: skops-b2ie2xry.pkl
widget:
- structuredData:
Age:
- -0.7989508220667412
- -0.021264850777441783
- -0.3128970900109291
EstimatedSalary:
- 0.4946075830589406
- -0.5773590622674106
- 0.14694272511525913
example_title: social-network-ads
datasets:
- saifhmb/social-network-ads
---
# Model description
This is a logistic regression classifier trained on the Social Network Ads dataset (https://huggingface.co/datasets/saifhmb/social-network-ads).
## Intended uses & limitations
[More Information Needed]
## Training Procedure
The preprocessing steps include an 80/20 train/test split and feature scaling applied to all features.
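A minimal sketch of that procedure; the feature and target column names (`Age`, `EstimatedSalary`, `Purchased`) are assumptions inferred from the widget example above.
```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = load_dataset("saifhmb/social-network-ads", split="train").to_pandas()
X = df[["Age", "EstimatedSalary"]]  # assumed feature columns
y = df["Purchased"]                 # assumed target column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # fit the scaler on the training split only
X_test = scaler.transform(X_test)

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_test, y_test))
```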
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-------------------|---------|
| C | 1.0 |
| class_weight | |
| dual | False |
| fit_intercept | True |
| intercept_scaling | 1 |
| l1_ratio | |
| max_iter | 100 |
| multi_class | auto |
| n_jobs | |
| penalty | l2 |
| random_state | |
| solver | lbfgs |
| tol | 0.0001 |
| verbose | 0 |
| warm_start | False |
</details>
### Model Plot
(Model plot omitted here: scikit-learn's interactive HTML estimator diagram, which renders the fitted `LogisticRegression()`.)
## Evaluation Results
| Metric | Value |
|-----------|----------|
| accuracy | 0.925 |
| precision | 0.944444 |
| recall | 0.772727 |
### Confusion Matrix
![Confusion Matrix](confusion_matrix.png)
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
This model card is written by the following authors: Seifullah Bello
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
``` |
adamo1139/Yi-1.5-34B-32K-uninstruct1-1106 | adamo1139 | "2024-06-14T06:44:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-13T21:10:33Z" | ---
license: apache-2.0
---
Yi-1.5-34B-32K fine-tuned on adamo1139/uninstruct-v1-experimental-chatml.
It's an attempt to fix the synthetic SFT contamination of the original Yi-1.5-34B-32K.
Next up: this model tuned with ORPO on rawrr_v2-2_stage1. After that will come HESOYAM and AEZAKMI finetunes based on those fixed base models. |
breno1996/brenio | breno1996 | "2024-06-13T21:11:24Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T21:10:41Z" | ---
license: openrail
---
|
Vikhrmodels/it-5.3-fp16-32k-EXL2 | Vikhrmodels | "2024-06-13T23:14:03Z" | 0 | 1 | exllamav2 | [
"exllamav2",
"safetensors",
"instruct",
"ru",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-06-13T21:11:27Z" | ---
library_name: exllamav2
language:
- ru
- en
license: apache-2.0
tags: [instruct]
---
# Vikhr 0.5 release*
Significantly more data was added to SFT, so JSON and multi-turn now work more stably on long contexts and hard prompts; the model's pretraining parameters were slightly tuned, and RoPE was added for 32k context.
- [HF model](https://huggingface.co/Vikhrmodels/it-5.3-fp16-32k)
|
OwlMaster/realgg | OwlMaster | "2024-06-13T21:29:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:15:27Z" | Entry not found |
OleksandrAbashkin/fine-tune-whisp | OleksandrAbashkin | "2024-06-13T21:18:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:18:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Arian97/opt-6.7b-lora | Arian97 | "2024-06-14T21:07:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:20:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yanayaco/Julievoice | yanayaco | "2024-06-13T21:22:57Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T21:22:36Z" | ---
license: openrail
---
|
AI-Wheelz/MaggieL | AI-Wheelz | "2024-06-13T21:23:41Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T21:22:42Z" | ---
license: openrail
---
|
tunabenson/go-emotions | tunabenson | "2024-06-13T21:55:39Z" | 0 | 0 | null | [
"text-classification",
"license:mit",
"region:us"
] | text-classification | "2024-06-13T21:26:13Z" | ---
license: mit
pipeline_tag: text-classification
--- |
Hsin-Hsin-Chen/models_for_cifar10 | Hsin-Hsin-Chen | "2024-06-13T22:23:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:27:11Z" | Entry not found |
datek/Qwen-Qwen1.5-7B-1718314139 | datek | "2024-06-13T21:29:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"region:us"
] | null | "2024-06-13T21:28:59Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
datek/google-gemma-2b-1718314185 | datek | "2024-06-13T21:29:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"region:us"
] | null | "2024-06-13T21:29:45Z" | ---
library_name: peft
base_model: google/gemma-2b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Yuki20/alpaca3_8b_aci2 | Yuki20 | "2024-06-13T21:32:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:32:20Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Yuki20
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
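A minimal inference sketch with Unsloth (how this checkpoint is meant to be loaded is an assumption, not stated in the card; `max_seq_length` is a placeholder):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Yuki20/alpaca3_8b_aci2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```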
|
anon11112/kiss | anon11112 | "2024-06-13T21:36:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:36:17Z" | Entry not found |
anon11112/grab | anon11112 | "2024-06-13T21:38:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:37:17Z" | Entry not found |
anon11112/spread | anon11112 | "2024-06-13T21:39:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T21:38:28Z" | Entry not found |
inflaton/gemma-2b-it-bnb-4bit-lora | inflaton | "2024-06-13T21:51:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:51:16Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** inflaton
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
c-eshih/instruct-pix2pix-model | c-eshih | "2024-06-14T07:18:13Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] | null | "2024-06-13T21:54:59Z" | Entry not found |
kevin009/llamamath12 | kevin009 | "2024-06-13T21:58:46Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T21:58:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** kevin009
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magnifi/parser_user_v5-0613-epoch7-0.002_user_and_ontology_upper_ticker_time_nosystem_prompt | magnifi | "2024-06-13T22:06:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-13T22:04:54Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kevin009/llamamath13 | kevin009 | "2024-06-14T23:48:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:08:03Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** kevin009
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BuroIdentidadDigital/V01 | BuroIdentidadDigital | "2024-06-13T22:17:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:08:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
weslei7/Snowfox | weslei7 | "2024-06-13T22:12:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:12:47Z" | Entry not found |
Vgtt/nbnb | Vgtt | "2024-06-13T22:21:57Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T22:17:14Z" | ---
license: openrail
---
|
azizbhh/Qwen1.5-0.5B-Chat-MCQ-causal_lm_5_shot | azizbhh | "2024-06-13T22:18:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-13T22:17:35Z" | Entry not found |
Musix/FM_old_23 | Musix | "2024-06-13T22:19:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:18:08Z" | Entry not found |
dhruvp17/llama2-flan | dhruvp17 | "2024-06-13T22:18:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:18:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mkalenderignite/my_awesome_model | mkalenderignite | "2024-06-13T22:21:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:21:35Z" | Entry not found |
Danjin/unsloth-gemma-glaive-function-calling | Danjin | "2024-06-14T03:59:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:24:33Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rezashkv/diffusion_pruning | rezashkv | "2024-06-19T03:10:07Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2406.12042",
"license:mit",
"region:us"
] | text-to-image | "2024-06-13T22:29:44Z" | ---
license: mit
language:
- en
tags:
- text-to-image
- stable-diffusion
- diffusers
---
# APTP: Adaptive Prompt-Tailored Pruning of T2I Diffusion Models
[![arXiv](https://img.shields.io/badge/Paper-arXiv-red?style=for-the-badge)](https://arxiv.org/abs/2406.12042)
[![GitHub](https://img.shields.io/badge/GitHub-Code-success?style=for-the-badge&logo=GitHub)](https://github.com/rezashkv/diffusion_pruning)
The implementation of the paper ["Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models"](https://arxiv.org/abs/2406.12042)
## Abstract
Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. Still, their computational intensity prohibits
resource-constrained organizations from deploying T2I models after fine-tuning them on their internal target data. While pruning
techniques offer a potential solution to reduce the computational burden of T2I models, static pruning methods use the same pruned
model for all input prompts, overlooking the varying capacity requirements of different prompts. Dynamic pruning addresses this issue by utilizing
a separate sub-network for each prompt, but it prevents batch parallelism on GPUs. To overcome these limitations, we introduce
Adaptive Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed for T2I diffusion models. Central to our approach is a
prompt router model, which learns to determine the required capacity for an input text prompt and routes it to an architecture code, given a
total desired compute budget for prompts. Each architecture code represents a specialized model tailored to the prompts assigned to it, and the
number of codes is a hyperparameter. We train the prompt router and architecture codes using contrastive learning, ensuring that similar prompts
are mapped to nearby codes. Further, we employ optimal transport to prevent the codes from collapsing into a single one. We demonstrate APTP's
effectiveness by pruning Stable Diffusion (SD) V2.1 using CC3M and COCO as target datasets. APTP outperforms the
single-model pruning baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters learned by APTP reveals they
are semantically meaningful. We also show that APTP can automatically discover prompts previously found empirically to be challenging for SD, e.g., prompts for generating images that contain text, and assign them to higher-capacity codes.
<p align="center">
<img src="assets/fig_1.gif" alt="APTP Overview" width="600" />
</p>
<p align="left">
<em>APTP: We prune a text-to-image diffusion model like Stable Diffusion (left) into a mixture of efficient experts (right) in a prompt-based manner. Our prompt router routes distinct types of prompts to different experts, allowing experts' architectures to be separately specialized by removing layers or channels.</em>
</p>
<p align="center">
<img src="assets/fig_2.gif" alt="APTP Pruning Scheme" width="600" />
</p>
<p align="left">
<em>APTP pruning scheme. We train the prompt router and the set of architecture codes to prune a T2I diffusion model into a mixture of experts. The prompt router consists of three modules. We use a Sentence Transformer as the prompt encoder to encode the input prompt into a representation z. Then, the architecture predictor transforms z into the architecture embedding e, which has the same dimensionality as the architecture codes. Finally, the router maps the embedding e to an architecture code a(i). We use optimal transport to evenly distribute the prompts in a training batch among the architecture codes. The architecture code a(i) = (u(i), v(i)) determines how the model's width and depth are pruned. We train the prompt router's parameters and the architecture codes end to end using the denoising objective of the pruned model L<sub>DDPM</sub>, a distillation loss between the pruned and original models L<sub>distill</sub>, the average resource usage for the samples in the batch R, and a contrastive objective L<sub>cont</sub> that encourages the embeddings e to preserve the semantic similarity of the representations z.</em>
</p>
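As a rough sketch of the balanced routing idea described above — a minimal illustration, not the authors' implementation; the embedding size, iteration count, and temperature `eps` are all assumed placeholders:

```python
import torch
import torch.nn.functional as F

def balanced_route(prompt_embeddings, architecture_codes, n_iters=3, eps=0.05):
    # Cosine similarity between prompt embeddings e (B, d) and codes a (K, d)
    e = F.normalize(prompt_embeddings, dim=-1)
    a = F.normalize(architecture_codes, dim=-1)
    scores = e @ a.t()                      # (B, K)
    # Sinkhorn-style normalization: pushes each code to receive roughly
    # B/K prompts from the batch instead of collapsing onto a single code
    q = torch.exp(scores / eps)
    for _ in range(n_iters):
        q = q / q.sum(dim=0, keepdim=True)  # balance codes (columns)
        q = q / q.sum(dim=1, keepdim=True)  # normalize prompts (rows)
    return q.argmax(dim=1)                  # hard assignment per prompt

# route a batch of 8 prompts among 4 architecture codes
print(balanced_route(torch.randn(8, 768), torch.randn(4, 768)))
```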
### Model Description
- **Developed by:** UMD Efficiency Group
- **Model type:** Text-to-Image Diffusion Model
- **Model Description:** APTP is a pruning scheme for text-to-image diffusion models like Stable Diffusion, resulting in a mixture of efficient experts specialized for different prompt types.
### License
APTP is released under the MIT License. Please see the [LICENSE](LICENSE) file for details.
## Training Dataset
We used Conceptual Captions and MS-COCO 2014 datasets for training the models. Details for downloading and preparing these datasets are provided in the [Github Repository](https://github.com/rezashkv/diffusion_pruning).
## File Structure
```
APTP
├── APTP-Base-CC3M
│ ├── arch0
│ ├── ...
│ └── arch15
├── APTP-Small-CC3M
│ ├── arch0
│ ├── ...
│ └── arch7
├── APTP-Base-COCO
│ ├── arch0
│ ├── ...
│ └── arch7
└── APTP-Small-COCO
├── arch0
├── ...
└── arch7
```
## Simple Inference Example
Make sure you follow the [provided instructions](https://github.com/rezashkv/diffusion_pruning?tab=readme-ov-file#installation) to install pdm from source.
```python
from diffusers import StableDiffusionPipeline, PNDMScheduler
from pdm.models import HyperStructure, StructureVectorQuantizer, UNet2DConditionModelPruned
from pdm.utils.data_utils import get_mpnet_embeddings
from transformers import AutoTokenizer, AutoModel
import torch
prompt_encoder_model_name_or_path = "sentence-transformers/all-mpnet-base-v2"
aptp_model_name_or_path = "rezashkv/APTP"
aptp_variant = "APTP-Base-CC3M"
sd_model_name_or_path = "stabilityai/stable-diffusion-2-1"
prompt_encoder = AutoModel.from_pretrained(prompt_encoder_model_name_or_path)
prompt_encoder_tokenizer = AutoTokenizer.from_pretrained(prompt_encoder_model_name_or_path)
hyper_net = HyperStructure.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/hypernet")
quantizer = StructureVectorQuantizer.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/quantizer")
prompts = ["a woman on a white background looks down and away from the camera the a forlorn look on her face"]
prompt_embedding = get_mpnet_embeddings(prompts, prompt_encoder, prompt_encoder_tokenizer)
arch_embedding = hyper_net(prompt_embedding)
expert_id = quantizer.get_cosine_sim_min_encoding_indices(arch_embedding)[0].item()
unet = UNet2DConditionModelPruned.from_pretrained(
    aptp_model_name_or_path,
    subfolder=f"{aptp_variant}/arch{expert_id}/checkpoint-30000/unet",
)
noise_scheduler = PNDMScheduler.from_pretrained(sd_model_name_or_path, subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained(sd_model_name_or_path, unet=unet, scheduler=noise_scheduler)
pipeline.to('cuda')
generator = torch.Generator(device='cuda').manual_seed(43)
image = pipeline(
prompt=prompts[0],
guidance_scale=7.5,
generator=generator,
output_type='pil',
).images[0]
image.save("image.png")
```
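Note that different prompts may route to different experts: the router picks `expert_id`, and the pruned U-Net is loaded from the matching `arch{expert_id}` subfolder, so a prompt that routes to a new expert requires loading that expert's U-Net in its place.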
## Uses
This model is designed for academic and research purposes, specifically for exploring the efficiency of text-to-image diffusion models through prompt-based pruning. Potential applications include:
1. **Research:** Researchers can use the model to study prompt-based pruning techniques and their impact on the performance and efficiency of text-to-image generation models.
2. **Education:** Educators and students can use this model as a learning tool for understanding advanced concepts in neural network pruning, diffusion models, and prompt engineering.
3. **Benchmarking:** The model can be used for benchmarking against other text-to-image generation models to assess the trade-offs between computational efficiency and output quality.
## Safety
When using these models, it is important to consider the following safety and ethical guidelines:
1. **Content Generation:** The model can generate a wide range of images based on text prompts. Users should ensure that the generated content adheres to ethical guidelines and does not produce harmful, offensive, or inappropriate images.
2. **Bias and Fairness:** Like other AI models, APTP may exhibit biases present in the training data. Users should be aware of these potential biases and take steps to mitigate their impact, particularly when the model is used in sensitive or critical applications.
3. **Data Privacy:** Ensure that any data used with the model complies with data privacy regulations. Avoid using personally identifiable information (PII) or sensitive data without proper consent.
4. **Responsible Use:** Users are encouraged to use the model responsibly, considering the potential social and ethical implications of their work. This includes avoiding the generation of misleading or false information and respecting the rights and dignity of individuals depicted in generated images.
By adhering to these guidelines, users can help ensure the responsible and ethical use of the APTP model.
## Contact
In case of any questions or issues, please contact the authors of the paper:
* [Reza Shirkavand](mailto:rezashkv@umd.edu)
* [Alireza Ganjdanesh](mailto:aliganj@umd.edu) |
Acebeat/retalk | Acebeat | "2024-06-13T22:33:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:33:13Z" | Entry not found |
taric49/LLAMA3_MoRA_2_r256_length512_adaptors | taric49 | "2024-06-13T22:38:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:34:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjfan1999/ChrisYoung2024 | mjfan1999 | "2024-06-13T23:01:33Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-13T22:38:26Z" | ---
license: unknown
---
|
Bluebomber182/Mara-Jade-Heidi-Shannon-StyleTTS2 | Bluebomber182 | "2024-06-13T22:58:57Z" | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | "2024-06-13T22:38:53Z" | ---
license: mit
---
|
ariellajones/GSEOKHWA | ariellajones | "2024-06-13T22:53:13Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-13T22:45:25Z" | ---
license: apache-2.0
---
|
Ilya-Nazimov/ruRoberta-large-odonata-ner | Ilya-Nazimov | "2024-06-13T22:48:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:48:57Z" | Entry not found |
MrDawg/BloxCast | MrDawg | "2024-06-13T22:51:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:51:17Z" | Entry not found |
hemhemoh/pegasus-xsum-finetuned-dialoguesum | hemhemoh | "2024-06-13T22:59:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T22:59:42Z" | Entry not found |
BewNye/betanewINSTASAMKA | BewNye | "2024-06-13T23:02:01Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T22:59:53Z" | ---
license: openrail
---
|
michisohn/lama_human_values | michisohn | "2024-06-13T23:00:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:59:58Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** michisohn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ramikan-BR/TiamaPY-1.1B-LORA-v25 | Ramikan-BR | "2024-06-13T23:06:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T23:05:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KJThe1/example-model | KJThe1 | "2024-06-13T23:06:31Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-13T23:06:31Z" | ---
license: mit
---
|
SneakyLemon/results | SneakyLemon | "2024-06-20T14:09:27Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | "2024-06-13T23:07:12Z" | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4335
- F1: 0.8190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 70
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.0065 | 0.0684 | 16 | 0.9778 | 0.5401 |
| 1.005 | 0.1368 | 32 | 0.9209 | 0.5508 |
| 0.8912 | 0.2051 | 48 | 0.8197 | 0.5698 |
| 0.8738 | 0.2735 | 64 | 0.7217 | 0.5946 |
| 0.6965 | 0.3419 | 80 | 0.6439 | 0.6593 |
| 0.6463 | 0.4103 | 96 | 0.6081 | 0.6828 |
| 0.6129 | 0.4786 | 112 | 0.5541 | 0.7278 |
| 0.5931 | 0.5470 | 128 | 0.5693 | 0.6868 |
| 0.5643 | 0.6154 | 144 | 0.5290 | 0.7454 |
| 0.5601 | 0.6838 | 160 | 0.5402 | 0.7159 |
| 0.5259 | 0.7521 | 176 | 0.5021 | 0.7613 |
| 0.5361 | 0.8205 | 192 | 0.5051 | 0.7653 |
| 0.5235 | 0.8889 | 208 | 0.4816 | 0.7747 |
| 0.526 | 0.9573 | 224 | 0.4726 | 0.7765 |
| 0.486 | 1.0256 | 240 | 0.4786 | 0.7712 |
| 0.4757 | 1.0940 | 256 | 0.4669 | 0.7804 |
| 0.4635 | 1.1624 | 272 | 0.4682 | 0.7891 |
| 0.4691 | 1.2308 | 288 | 0.4561 | 0.7898 |
| 0.4682 | 1.2991 | 304 | 0.4818 | 0.7542 |
| 0.4229 | 1.3675 | 320 | 0.4704 | 0.7831 |
| 0.4192 | 1.4359 | 336 | 0.4544 | 0.7964 |
| 0.4249 | 1.5043 | 352 | 0.4493 | 0.7928 |
| 0.4339 | 1.5726 | 368 | 0.4597 | 0.7921 |
| 0.4513 | 1.6410 | 384 | 0.4478 | 0.7931 |
| 0.4553 | 1.7094 | 400 | 0.4474 | 0.7916 |
| 0.42 | 1.7778 | 416 | 0.4473 | 0.7917 |
| 0.4194 | 1.8462 | 432 | 0.4416 | 0.8002 |
| 0.4265 | 1.9145 | 448 | 0.4370 | 0.8054 |
| 0.4216 | 1.9829 | 464 | 0.4324 | 0.8117 |
| 0.3869 | 2.0513 | 480 | 0.4369 | 0.8010 |
| 0.3617 | 2.1197 | 496 | 0.4424 | 0.8096 |
| 0.3773 | 2.1880 | 512 | 0.4558 | 0.8042 |
| 0.3852 | 2.2564 | 528 | 0.4311 | 0.8109 |
| 0.3726 | 2.3248 | 544 | 0.4403 | 0.8096 |
| 0.3586 | 2.3932 | 560 | 0.4381 | 0.8125 |
| 0.3756 | 2.4615 | 576 | 0.4337 | 0.8109 |
| 0.3765 | 2.5299 | 592 | 0.4341 | 0.8110 |
| 0.4104 | 2.5983 | 608 | 0.4263 | 0.8120 |
| 0.3704 | 2.6667 | 624 | 0.4404 | 0.8063 |
| 0.4087 | 2.7350 | 640 | 0.4271 | 0.8171 |
| 0.3498 | 2.8034 | 656 | 0.4336 | 0.8162 |
| 0.3606 | 2.8718 | 672 | 0.4286 | 0.8180 |
| 0.343 | 2.9402 | 688 | 0.4343 | 0.8039 |
| 0.378 | 3.0085 | 704 | 0.4491 | 0.8018 |
| 0.3199 | 3.0769 | 720 | 0.4344 | 0.8131 |
| 0.3529 | 3.1453 | 736 | 0.4332 | 0.8148 |
| 0.3228 | 3.2137 | 752 | 0.4362 | 0.8170 |
| 0.3061 | 3.2821 | 768 | 0.4390 | 0.8162 |
| 0.3277 | 3.3504 | 784 | 0.4385 | 0.8170 |
| 0.2973 | 3.4188 | 800 | 0.4389 | 0.8143 |
| 0.3162 | 3.4872 | 816 | 0.4348 | 0.8181 |
| 0.3078 | 3.5556 | 832 | 0.4345 | 0.8171 |
| 0.3482 | 3.6239 | 848 | 0.4359 | 0.8125 |
| 0.3243 | 3.6923 | 864 | 0.4336 | 0.8170 |
| 0.3465 | 3.7607 | 880 | 0.4337 | 0.8175 |
| 0.2984 | 3.8291 | 896 | 0.4329 | 0.8194 |
| 0.3159 | 3.8974 | 912 | 0.4332 | 0.8190 |
| 0.3327 | 3.9658 | 928 | 0.4335 | 0.8190 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
sergon19/green_bg_LoRa10-SDX3-plus | sergon19 | "2024-06-13T23:27:59Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"sd3",
"sd3-diffusers",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"license:openrail++",
"region:us"
] | text-to-image | "2024-06-13T23:14:30Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- sd3
- sd3-diffusers
- template:sd-lora
base_model: stabilityai/stable-diffusion-3-medium-diffusers
instance_prompt: sgc style
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - sergon19/green_bg_LoRa10-SDX3-plus
<Gallery />
## Model description
These are sergon19/green_bg_LoRa10-SDX3-plus DreamBooth weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
## Trigger words
You should use sgc style to trigger the image generation.
## Download model
[Download](https://huggingface.co/sergon19/green_bg_LoRa10-SDX3-plus/tree/main) them in the Files & versions tab.
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
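Until the snippet above is filled in, the following is a plausible minimal sketch for loading these weights with diffusers — the use of `load_lora_weights`, the dtype, and the step count are assumptions, not taken from this repository:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.load_lora_weights("sergon19/green_bg_LoRa10-SDX3-plus")  # assumed LoRA-style weights
pipe.to("cuda")

# "sgc style" is the trigger phrase documented in this card
image = pipe("a product photo in sgc style", num_inference_steps=28).images[0]
image.save("sgc_style.png")
```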
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
viihzin/stephanie | viihzin | "2024-06-13T23:15:32Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T23:15:11Z" | ---
license: openrail
---
|
magnifi/parser_user_v5-0613-epoch6-0.002_user_and_ontology_upper_ticker_time_nosystem_prompt | magnifi | "2024-06-13T23:20:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-13T23:18:02Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
arash666/esfani | arash666 | "2024-06-13T23:28:14Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T23:25:19Z" | ---
license: openrail
---
|
amritpuhan/fine-tuned-bert-base-uncased-swag-peft | amritpuhan | "2024-06-14T04:24:41Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:swag",
"base_model:bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-06-13T23:25:29Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bert-base-uncased
datasets:
- swag
metrics:
- accuracy
model-index:
- name: fine-tuned-bert-base-uncased-swag-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-bert-base-uncased-swag-peft
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6557
- Accuracy: 0.7483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0316 | 1.0 | 1150 | 0.8202 | 0.6860 |
| 0.9261 | 2.0 | 2300 | 0.7423 | 0.7144 |
| 0.8862 | 3.0 | 3450 | 0.7114 | 0.7268 |
| 0.8612 | 4.0 | 4600 | 0.6924 | 0.7347 |
| 0.8637 | 5.0 | 5750 | 0.6819 | 0.7393 |
| 0.8541 | 6.0 | 6900 | 0.6691 | 0.7441 |
| 0.8369 | 7.0 | 8050 | 0.6635 | 0.7464 |
| 0.8349 | 8.0 | 9200 | 0.6591 | 0.7475 |
| 0.8302 | 9.0 | 10350 | 0.6572 | 0.7483 |
| 0.8333 | 10.0 | 11500 | 0.6557 | 0.7483 |
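As a usage sketch (not part of the original card), the adapter can presumably be loaded on top of the base model for SWAG-style multiple choice roughly as follows — the example sentence and batching details are assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForMultipleChoice, AutoTokenizer

base = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
model = PeftModel.from_pretrained(base, "amritpuhan/fine-tuned-bert-base-uncased-swag-peft").eval()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

context = "A man is sitting at a piano."
endings = ["He plays a song.", "He eats the piano.", "He flies away.", "He sells tickets."]
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # -> (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**enc).logits                   # (1, num_choices)
print(endings[logits.argmax(-1).item()])
```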
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1 |
Anytram/distilbert-base-uncased-finetuned-squad | Anytram | "2024-06-13T23:33:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-13T23:33:05Z" | Entry not found |
SneakyLemon/Llama3LoraCauseEffect | SneakyLemon | "2024-06-13T23:36:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T23:35:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeiku/Aura_Qwen2_v2_7B | jeiku | "2024-06-13T23:46:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:ResplendentAI/Qwen_Sissification_LoRA_128",
"base_model:jeiku/dontusethis",
"base_model:ResplendentAI/Qwen_jeiku_LoRA_128",
"base_model:ResplendentAI/Qwen_Soul_LoRA_128",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-13T23:40:20Z" | ---
base_model:
- Qwen/Qwen2-7B-Instruct
- ResplendentAI/Qwen_Sissification_LoRA_128
- jeiku/dontusethis
- ResplendentAI/Qwen_jeiku_LoRA_128
- jeiku/dontusethis
- jeiku/dontusethis
- ResplendentAI/Qwen_Soul_LoRA_128
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [jeiku/dontusethis](https://huggingface.co/jeiku/dontusethis) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) + [ResplendentAI/Qwen_Sissification_LoRA_128](https://huggingface.co/ResplendentAI/Qwen_Sissification_LoRA_128)
* [jeiku/dontusethis](https://huggingface.co/jeiku/dontusethis) + [ResplendentAI/Qwen_jeiku_LoRA_128](https://huggingface.co/ResplendentAI/Qwen_jeiku_LoRA_128)
* [jeiku/dontusethis](https://huggingface.co/jeiku/dontusethis) + [ResplendentAI/Qwen_Soul_LoRA_128](https://huggingface.co/ResplendentAI/Qwen_Soul_LoRA_128)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/dontusethis+ResplendentAI/Qwen_jeiku_LoRA_128
- model: jeiku/dontusethis+ResplendentAI/Qwen_Soul_LoRA_128
- model: Qwen/Qwen2-7B-Instruct+ResplendentAI/Qwen_Sissification_LoRA_128
merge_method: model_stock
base_model: jeiku/dontusethis
dtype: float16
```
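To reproduce the merge, this configuration can typically be run with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model` (assuming a standard mergekit installation; both paths are placeholders).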
|
manbeast3b/ZZZZZZZZZZZtest15 | manbeast3b | "2024-06-13T23:52:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-13T23:50:04Z" | Entry not found |
VanishedBrB/CyGuy | VanishedBrB | "2024-06-14T16:44:39Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"phi3",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-13T23:52:38Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: microsoft/Phi-3-mini-128k-instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Dmenorsz/mcprimo | Dmenorsz | "2024-06-14T00:00:35Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-13T23:59:33Z" | ---
license: openrail
---
|
magnifi/parser_user_v5-0613-epoch6-0.002_user_and_ontology_upper_ticker_time_system_prompt | magnifi | "2024-06-14T00:01:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-13T23:59:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
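As a usage illustration, a minimal inference sketch with Unsloth's `FastLanguageModel` (the sequence length and prompt are assumptions, not values taken from this card):
```python
# Sketch only: assumes unsloth is installed and a CUDA device is available.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="magnifi/parser_user_v5-0613-epoch6-0.002_user_and_ontology_upper_ticker_time_system_prompt",
    max_seq_length=4096,   # assumption; matches the 4k-context base model
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Your query here", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```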
|
Hev832/pretrained | Hev832 | "2024-06-14T00:11:34Z" | 0 | 1 | null | [
"pretrained",
"hubert",
"RVC",
"ai",
"vits",
"vc",
"voice-cloning",
"voice-conversion",
"Voice2Voice",
"audio-to-audio",
"license:mit",
"region:us"
] | audio-to-audio | "2024-06-13T23:59:51Z" | ---
license: mit
pipeline_tag: audio-to-audio
tags:
- pretrained
- hubert
- RVC
- ai
- vits
- vc
- voice-cloning
- voice-conversion
- Voice2Voice
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
body {
font-family: Arial, sans-serif;
padding: 2rem;
color: #333;
}
.container {
max-width: 800px;
margin: 0 auto;
padding: 2rem;
border-radius: 5px;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.1);
text-align: center;
}
h1 {
margin-bottom: 1.5rem;
font-size: 2.5rem;
}
h2 {
margin-bottom: 1rem;
font-size: 2rem;
}
ul {
list-style: none;
padding: 0;
margin: 0;
}
ul li {
margin-bottom: 0.5rem;
}
p {
margin-bottom: 1.5rem;
font-size: 1.1rem;
}
a {
color: #007bff;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
</style>
</head>
<body>
<div class="container">
<hr style="border: none; height: 2px; background-color: #800080;">
<h1>Voice Conversion Models Hub</h1>
<p>Welcome to our comprehensive repository, a treasure trove of pretrained models, HuBERT models, and an assortment of other files and models, all tailored for use in the Retrieval-based Voice Conversion (RVC) neural network.</p>
<hr style="border: none; height: 2px; background-color: #800080;">
<h1>Content</h1>
<ul>
<li><strong>Pretrained Models:</strong> A vast collection of pretrained models, ready to be fine-tuned for your specific voice conversion tasks. These models have been trained on diverse datasets, ensuring a broad spectrum of voice characteristics.</li>
<li><strong>HuBERT Models:</strong> A selection of HuBERT models, recognized for their ability to learn high-quality speech representations from raw audio data. These models are ideal for tasks that require a deep understanding of speech nuances.</li>
<li><strong>Additional Files and Models:</strong> A miscellaneous collection of files and models that can be beneficial for various aspects of voice conversion, from data preprocessing to model evaluation.</li>
</ul>
<hr style="border: none; height: 2px; background-color: #800080;">
<h1>Note</h1>
    <p>Dear friends, I need your help finding new models (HuBERT models, pretrained models, and others). I also need help with the existing files: I am sometimes unsure of their real names and end up naming them as they appear in other repositories, which may differ from the originals. If you have links to suitable models, please leave them in the <strong>"Community"</strong> tab. You can also upload models and change file or folder names through the <strong>"Contribute"</strong> section. I will be extremely grateful for any help you can provide.</p>
<hr style="border: none; height: 2px; background-color: #800080;">
<h2><a href="https://huggingface.co/Politrees/all_RVC-pretrained_and_other/tree/main/HuBERTs" target="_blank"><strong>HuBERT Models</strong></a>:</h2>
<ul>
<li><a href="https://huggingface.co/rinna/japanese-hubert-base" target="_blank"><strong>japanese hubert base</strong></a></li>
<li><a href="https://huggingface.co/TencentGameMate/chinese-hubert-large" target="_blank"><strong>chinese hubert large</strong></a></li>
</ul>
<h2><a href="https://huggingface.co/Politrees/all_RVC-pretrained_and_other/tree/main/pretrained" target="_blank"><strong>Pre-Trained Models</strong></a>:</h2>
<ul>
<li><a href="https://huggingface.co/MUSTAR" target="_blank"><strong>Snowie and RIN_E3</strong></a></li>
<li><a href="https://huggingface.co/ORVC/Ov2Super" target="_blank"><strong>Ov2Super</strong></a></li>
<li><a href="https://huggingface.co/blaise-tk/TITAN" target="_blank"><strong>TITAN</strong></a></li>
<li><a href="https://huggingface.co/TheStinger/itaila" target="_blank"><strong>itaila</strong></a></li>
<li><a href="https://huggingface.co/SeoulStreamingStation" target="_blank"><strong>KLM</strong></a></li>
<li><a href="https://huggingface.co/Sztef/SingerPreTrained" target="_blank"><strong>SingerPretrain</strong></a></li>
<li><a href="https://huggingface.co/Razer112/DMR_Pretrain" target="_blank"><strong>DMR</strong></a></li>
<li><a href="https://huggingface.co/Plasmati/Pretrains" target="_blank"><strong>UKR and UKA</strong></a></li>
<li><a href="https://huggingface.co/Loren85/IMA-TEST-V1" target="_blank"><strong>IMA_Robotic</strong></a></li>
</ul>
<hr style="border: none; height: 2px; background-color: #800080;">
</div>
</body>
</html> |
manbeast3b/ZZZZZZZZZZZtest16 | manbeast3b | "2024-06-14T00:03:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T00:01:01Z" | Entry not found |
FebTns2/Modelos | FebTns2 | "2024-06-14T00:09:32Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T00:02:15Z" | ---
license: openrail
---
|
p3ngdump/prompt_injection_ko | p3ngdump | "2024-06-14T00:06:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:06:48Z" | Entry not found |
Paramstr/MLX_google_gemma-2b-it_testing_14.06.24 | Paramstr | "2024-06-14T00:09:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:09:06Z" | Entry not found |
proyectoItegrado241EAFIT/Arima_model | proyectoItegrado241EAFIT | "2024-06-14T02:16:51Z" | 0 | 0 | null | [
"Ciencia_de_datos",
"Arima",
"EAFIT",
"Python",
"research",
"graph-ml",
"es",
"region:us"
] | graph-ml | "2024-06-14T00:09:25Z" | ---
language:
- es
pipeline_tag: graph-ml
tags:
- Ciencia_de_datos
- Arima
- EAFIT
- Python
- research
---
The models in this repository belong to the integrated semester project at Universidad EAFIT, corresponding to semester 2024-1 of the Master's program in Data Science and Analytics.
Each model was built from the time series of each day of the week, for every hour of the day. In this case, an ARIMA model is used for forecasting.
It produces hourly predictions for the Medellín metro.
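As an illustration of the per-hour ARIMA setup described above, a minimal sketch using statsmodels (the file, column names, and ARIMA order are assumptions; the card does not specify them):
```python
# Sketch only: file, column names, and (p, d, q) order are placeholders.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# One series per (weekday, hour) pair; here, e.g., Mondays at 07:00.
series = pd.read_csv("metro_mondays_07h.csv", parse_dates=["date"], index_col="date")["passengers"]

fit = ARIMA(series, order=(1, 1, 1)).fit()  # hypothetical order
print(fit.forecast(steps=4))                # predict the next four Mondays at 07:00
``` |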
rmsdud/autotrain_test2 | rmsdud | "2024-06-14T00:14:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:14:30Z" | Entry not found |
Seikaijyu/RWKV-x060-World-3B-v2.1-xuexue-v4.roleplay | Seikaijyu | "2024-06-22T07:27:43Z" | 0 | 3 | null | [
"zh",
"license:mit",
"region:us"
] | null | "2024-06-14T00:16:11Z" | ---
license: mit
language:
- zh
---
### Character Setting
#### Muxue (沐雪) is an AI girl who calls herself "Xuexue" (雪雪). Her developer is "Mumu" (沐沐). She is currently working hard to become a proper VTuber (virtual streamer), tries to answer questions from her audience, and lives inside her developer's (Mumu's) PC case.
### Model Description
#### A roleplay model (a character-specific tune codenamed "Xuexue") produced by PiSSA fine-tuning of the RWKV6-v2.1-3B base model. No prompt needs to be added to chat with this model; simply use the character names shown below.
#### ChatGLM4 was used to transform and restructure the [Muice dataset](https://modelscope.cn/datasets/Moemuu/Muice-Dataset), converting the single-turn corpus into multi-turn dialogue. The corpus is identical to the one used for the v3.2 model, but it was extensively cleaned before this version was trained, and better hyperparameters and a better training regime were used.
#### This is a dual-role model: you can talk to Muxue either as an audience member or as Mumu (the developer).
#### Naturally, a VTuber needs some entertainment value, so this version of Muxue is more humorous, more inclined to tease the questions raised by the audience (you) or by Mumu (you), and a little long-winded when she speaks.
#### Sample outputs:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417b108b03817ada6444bb8/akZajBn_Wk-PbJIKSPqZ1.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417b108b03817ada6444bb8/JJ7VStPp_BnyMahf5MVru.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417b108b03817ada6444bb8/c4PW7OA_C-OIjFAnyYS33.png)
Recommended parameters (a request sketch follows the list):
##### Temperature: between 1 and 3
##### Top_P: between 0.55 and 0.65
##### Presence Penalty: between 0 and 0.4
##### Frequency Penalty: between 0.6 and 1.2
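For illustration, a minimal request sketch using these parameters. It assumes RWKV Runner is serving the model through its OpenAI-compatible API on the default local port; the endpoint, port, and chosen parameter values are assumptions, not part of this card:
```python
# Sketch only: endpoint and port assume RWKV Runner's local OpenAI-compatible API.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/chat/completions",  # assumed RWKV Runner default
    json={
        # "Xuexue, are you streaming today?"
        "messages": [{"role": "user", "content": "雪雪,今天直播吗?"}],
        "temperature": 2.0,        # recommended: 1-3
        "top_p": 0.6,              # recommended: 0.55-0.65
        "presence_penalty": 0.2,   # recommended: 0-0.4
        "frequency_penalty": 0.9,  # recommended: 0.6-1.2
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```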
#### Recommended chat formats
Talking to Muxue (沐雪) as an audience member (观众):
```
观众:
沐雪:
```
Talking to Muxue (沐雪) as Mumu (沐沐, the developer):
```
沐沐:
沐雪:
```
#### RWKV Runner configuration examples
##### For chat mode, refer to the settings shown below:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417b108b03817ada6444bb8/V2gP0PHz8G0Bs23H7dnNm.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417b108b03817ada6444bb8/7wpuxTV3wpAU40fOUSt2z.png)
##### For continuation mode, use the following settings:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417b108b03817ada6444bb8/DiohPkQYzXadEjr_bibKf.png)
## <b style="color: red;">Note: this model was not trained on any NSFW corpus and can be used at any time, in any setting.</b>
|
Yolom/Jess | Yolom | "2024-06-14T00:19:05Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T00:17:00Z" | ---
license: openrail
---
|
depth-anything/Depth-Anything-V2-Metric-VKITTI-Large | depth-anything | "2024-06-21T16:45:05Z" | 0 | 3 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T00:18:51Z" | ---
license: apache-2.0
---
# Depth Anything V2 for Metric Depth Estimation
# Pre-trained Models
We provide **six metric depth models** of three scales for indoor and outdoor scenes, respectively.
| Base Model | Params | Indoor (Hypersim) | Outdoor (Virtual KITTI 2) |
|:-|-:|:-:|:-:|
| Depth-Anything-V2-Small | 24.8M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Small/resolve/main/depth_anything_v2_metric_hypersim_vits.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Small/resolve/main/depth_anything_v2_metric_vkitti_vits.pth?download=true) |
| Depth-Anything-V2-Base | 97.5M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Base/resolve/main/depth_anything_v2_metric_hypersim_vitb.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Base/resolve/main/depth_anything_v2_metric_vkitti_vitb.pth?download=true) |
| Depth-Anything-V2-Large | 335.3M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Large/resolve/main/depth_anything_v2_metric_hypersim_vitl.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Large/resolve/main/depth_anything_v2_metric_vkitti_vitl.pth?download=true) |
*We recommend first trying our larger models (if the computational cost is affordable) and the indoor version.*
## Usage
### Preparation
```bash
git clone https://github.com/DepthAnything/Depth-Anything-V2
cd Depth-Anything-V2/metric_depth
pip install -r requirements.txt
```
Download the checkpoints listed [here](#pre-trained-models) and put them under the `checkpoints` directory.
### Use our models
```python
import cv2
import torch
from depth_anything_v2.dpt import DepthAnythingV2
model_configs = {
'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]}
}
encoder = 'vitl' # or 'vits', 'vitb'
dataset = 'hypersim' # 'hypersim' for indoor model, 'vkitti' for outdoor model
max_depth = 20 # 20 for indoor model, 80 for outdoor model
model = DepthAnythingV2(**{**model_configs[encoder], 'max_depth': max_depth})
model.load_state_dict(torch.load(f'checkpoints/depth_anything_v2_metric_{dataset}_{encoder}.pth', map_location='cpu'))
model.eval()
raw_img = cv2.imread('your/image/path')
depth = model.infer_image(raw_img) # HxW depth map in meters in numpy
```
### Running the script on images
Here, we take the `vitl` encoder as an example. You can also use `vitb` or `vits` encoders.
```bash
# indoor scenes
python run.py \
--encoder vitl \
--load-from checkpoints/depth_anything_v2_metric_hypersim_vitl.pth \
--max-depth 20 \
--img-path <path> --outdir <outdir> [--input-size <size>] [--save-numpy]
# outdoor scenes
python run.py \
--encoder vitl \
--load-from checkpoints/depth_anything_v2_metric_vkitti_vitl.pth \
--max-depth 80 \
--img-path <path> --outdir <outdir> [--input-size <size>] [--save-numpy]
```
### Project 2D images to point clouds:
```bash
python depth_to_pointcloud.py \
--encoder vitl \
--load-from checkpoints/depth_anything_v2_metric_hypersim_vitl.pth \
--max-depth 20 \
--img-path <path> --outdir <outdir>
```
### Reproduce training
Please first prepare the [Hypersim](https://github.com/apple/ml-hypersim) and [Virtual KITTI 2](https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/) datasets. Then:
```bash
bash dist_train.sh
```
## Citation
If you find this project useful, please consider citing:
```bibtex
@article{depth_anything_v2,
title={Depth Anything V2},
author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
journal={arXiv:2406.09414},
year={2024}
}
@inproceedings{depth_anything_v1,
title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
booktitle={CVPR},
year={2024}
}
```
|
cassanof/outs | cassanof | "2024-06-14T08:31:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T00:21:20Z" | Entry not found |
AttnSpeecher/distilhubert-finetuned-gtzan | AttnSpeecher | "2024-07-02T04:46:49Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-14T00:22:55Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5615
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9168 | 1.0 | 113 | 1.8471 | 0.54 |
| 1.1922 | 2.0 | 226 | 1.2674 | 0.63 |
| 1.09 | 3.0 | 339 | 0.9215 | 0.77 |
| 0.6861 | 4.0 | 452 | 0.8330 | 0.74 |
| 0.4946 | 5.0 | 565 | 0.6410 | 0.84 |
| 0.339 | 6.0 | 678 | 0.5818 | 0.81 |
| 0.2757 | 7.0 | 791 | 0.5240 | 0.85 |
| 0.1957 | 8.0 | 904 | 0.5707 | 0.8 |
| 0.1878 | 9.0 | 1017 | 0.5341 | 0.85 |
| 0.114 | 10.0 | 1130 | 0.5615 | 0.84 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
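For a quick usage check, a minimal inference sketch with the `audio-classification` pipeline (the audio path is a placeholder):
```python
# Sketch only: the clip path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="AttnSpeecher/distilhubert-finetuned-gtzan",
)
predictions = classifier("path/to/clip.wav")  # list of genre labels with scores
print(predictions)
```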
|
TIGER-Lab/stable-diffusion-3-medium | TIGER-Lab | "2024-06-14T00:28:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:28:39Z" | Entry not found |
Filan/test | Filan | "2024-06-14T00:32:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:32:29Z" | Entry not found |