modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Kwbvet/temp-model | Kwbvet | "2024-01-30T03:08:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:35:01Z" | Entry not found |
hoang230095/repo_name | hoang230095 | "2024-01-26T07:36:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:36:33Z" | Entry not found |
anything66/tt | anything66 | "2024-01-26T07:37:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:37:25Z" | Entry not found |
SatishFaction/Whisper_Finetuning | SatishFaction | "2024-01-26T07:37:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:37:36Z" | Entry not found |
berkemavissss/musicberke2024 | berkemavissss | "2024-01-26T07:39:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:38:13Z" | Entry not found |
Alikatana/htrrr | Alikatana | "2024-01-26T07:41:34Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-01-26T07:41:34Z" | ---
license: other
license_name: lice
license_link: LICENSE
---
|
RustedNature/Taxi-v3 | RustedNature | "2024-01-26T07:42:06Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T07:42:04Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
model = load_from_hub(repo_id="RustedNature/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
lijiacai/pp-matting | lijiacai | "2024-01-26T10:16:48Z" | 0 | 0 | null | [
"paddlepaddle",
"region:us"
] | null | "2024-01-26T07:43:10Z" | Entry not found |
wangyuwy/resnet50 | wangyuwy | "2024-01-26T07:53:18Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"resnet",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-01-26T07:49:18Z" | Entry not found |
dfapp/anhngu | dfapp | "2024-01-26T07:51:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:51:32Z" | Entry not found |
Eelalzep/languagehelp | Eelalzep | "2024-01-26T07:53:19Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-01-26T07:53:19Z" | ---
license: mit
---
|
ducnutridday/product2 | ducnutridday | "2024-01-26T07:54:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:53:41Z" | Entry not found |
DarqueDante/Phi-2-Roblox_Coder | DarqueDante | "2024-01-26T07:57:23Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-01-26T07:57:22Z" | ---
license: mit
---
|
Davidfory/Girl | Davidfory | "2024-01-26T07:59:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T07:59:18Z" | Entry not found |
gonpow/gonpow | gonpow | "2024-01-26T08:01:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:01:34Z" | Entry not found |
mingli/optaeg-v1-fashionminst-tiny-49k | mingli | "2024-01-26T08:38:54Z" | 0 | 0 | null | [
"image-classification",
"dataset:fashion_mnist",
"license:mit",
"region:us"
] | image-classification | "2024-01-26T08:04:13Z" | ---
license: mit
datasets:
- fashion_mnist
metrics:
- accuracy
pipeline_tag: image-classification
---
A tiny fashion-mnist model to demonstrate the potential of the learnable activation OptAEG-V1.
The model can reach 90.2% accuracy with only 48.5k parameters.
The OptAEG-V1 learnable activation is based on a theory of Arithmetic Expression Geometry, which is still in development.
Please visit the draft papers on [theory](https://github.com/mountain/aeg-paper) and [neural networks](https://github.com/mountain/optim-aeg) for reference |
Chuanming/Skywork-13B-base-GGUF | Chuanming | "2024-01-26T08:51:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:05:18Z" | Entry not found |
Kartikeya2002/Chatbot | Kartikeya2002 | "2024-01-26T08:05:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:05:39Z" | Entry not found |
jesseyan/tiny_classify_bert | jesseyan | "2024-01-26T08:31:57Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-01-26T08:11:13Z" | ---
license: apache-2.0
---
|
vierlinglukas/PyramidsRND | vierlinglukas | "2024-01-26T08:14:10Z" | 0 | 0 | ml-agents | [
"ml-agents",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2024-01-26T08:14:09Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vierlinglukas/PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Redhotchilipoppy/MontelSpotGuesser | Redhotchilipoppy | "2024-01-26T08:16:58Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T08:16:58Z" | ---
license: apache-2.0
---
|
fraisdufour/pokemon-lora | fraisdufour | "2024-01-26T08:19:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:19:02Z" | Entry not found |
Gjfrfx3576/ghi | Gjfrfx3576 | "2024-01-26T08:19:43Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-01-26T08:19:43Z" | ---
license: mit
---
|
Lianghanxin/Aa | Lianghanxin | "2024-01-26T08:23:43Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2024-01-26T08:23:43Z" | ---
license: bigscience-openrail-m
---
|
ramsi-k/q-FrozenLake-v1-4x4-noSlippery | ramsi-k | "2024-01-26T08:25:03Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T08:25:01Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ramsi-k/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
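Once loaded, the pickled dict carries the learned Q-table alongside the environment id. A minimal greedy-rollout sketch, assuming the course-style `"qtable"` key and the gymnasium step API (both assumptions, not recorded in this card):
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    # Greedy action from the learned Q-table ("qtable" key is an assumption).
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print(f"episode return: {total_reward}")
```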
|
yyh12138/SOHEEVocal | yyh12138 | "2024-02-09T11:17:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:25:34Z" | Entry not found |
Leoxing/rcnz-backup | Leoxing | "2024-01-26T11:42:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:28:34Z" | Entry not found |
PixArt-alpha/PixArt-XL-2-SAM-256x256 | PixArt-alpha | "2024-01-26T08:48:47Z" | 0 | 3 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-01-26T08:28:37Z" | ---
license: creativeml-openrail-m
---
|
AdityaPandey/zephyr_fine_tune | AdityaPandey | "2024-01-26T11:13:21Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"4-bit",
"gptq",
"region:us"
] | null | "2024-01-26T08:28:48Z" | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr_fine_tune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr_fine_tune
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
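For reference, a minimal sketch of how these settings map onto a TRL `SFTTrainer` run. The dataset name, text column, and LoRA rank are placeholders, since the card does not record them:
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "TheBloke/zephyr-7B-alpha-GPTQ"

# Hyperparameters copied from this card; everything marked "assumed" is not.
args = TrainingArguments(
    output_dir="zephyr_fine_tune",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,  # "Native AMP" mixed precision
)
trainer = SFTTrainer(
    model=base,
    args=args,
    train_dataset=load_dataset("some/dataset", split="train"),  # hypothetical dataset
    dataset_text_field="text",                                  # assumed column name
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32),  # rank assumed
    tokenizer=AutoTokenizer.from_pretrained(base),
)
trainer.train()
```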
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
Facepalm0/q-FrozenLake-v1-4x4-noSlippery | Facepalm0 | "2024-01-26T08:39:20Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T08:39:17Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Facepalm0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hir35/game | hir35 | "2024-01-26T08:45:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:45:48Z" | Entry not found |
Kuhizu/RestofModels | Kuhizu | "2024-01-26T10:35:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:46:40Z" | Entry not found |
Kuhizu/RestOfLoras | Kuhizu | "2024-03-19T12:41:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:46:53Z" | Entry not found |
duccd/llama2-vietnamese | duccd | "2024-01-26T08:50:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"region:us"
] | null | "2024-01-26T08:47:28Z" | ---
library_name: peft
base_model: models/vbd-llama2-7B-50b-chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
mzbac/Mixtral-8x7B-v0.1-hf-4bit-mlx-adapters | mzbac | "2024-01-26T13:59:49Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-01-26T08:51:04Z" | ---
license: mit
---
# QLoRA adapters for Mixtral-8x7B-v0.1-hf-4bit-mlx
## Fine-tuned on the Guanaco dataset
## Inference via mlx-lm
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Mixtral-8x7B-v0.1-hf-4bit-mlx", adapter_file="adapters.npz")
generate(model=model, tokenizer=tokenizer, prompt="### Human: write a quick sort in python.\n### Assistant: ", max_tokens=500, verbose=True, temp=0.3)
```
## Serve as an API service
```bash
pip install mlx-llm-server
mlx-llm-server --model-path mlx-community/Mixtral-8x7B-v0.1-hf-4bit-mlx --adapter-file adapters.npz
``` |
Facepalm0/q-Taxi-v3-test | Facepalm0 | "2024-01-26T08:52:34Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T08:52:32Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Facepalm0/q-Taxi-v3-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Vivekkotha544/BaseE | Vivekkotha544 | "2024-01-26T08:55:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T08:55:37Z" | Entry not found |
LEE-F/sdsadasd | LEE-F | "2024-04-15T08:53:20Z" | 0 | 0 | diffusers | [
"diffusers",
"translation",
"ab",
"license:apache-2.0",
"region:us"
] | translation | "2024-01-26T08:57:41Z" | ---
license: apache-2.0
language:
- ab
metrics:
- bleurt
library_name: diffusers
pipeline_tag: translation
--- |
Dongwei01/test | Dongwei01 | "2024-01-26T09:01:14Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-01-26T09:01:14Z" | ---
license: mit
---
|
rAIfle/BagelMIsteryTour-v2-8x7B-exl2-rpcal | rAIfle | "2024-04-24T08:44:29Z" | 0 | 4 | null | [
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:Sao10K/Sensualize-Mixtral-bf16",
"base_model:merge:Sao10K/Sensualize-Mixtral-bf16",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-v0.1",
"region:us"
] | null | "2024-01-26T09:02:12Z" | ---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
---
Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
Branches:
- `main` -- `measurement.json`
- `2.25b6h` -- 2.25bpw, 6bit lm_head
- `3b6h` -- 3bpw, 6bit lm_head
- `5b6h` -- 5bpw, 6bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
Requires ExllamaV2 version 0.0.11 and up.
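Since each quantization lives on its own branch, a given bpw variant can be fetched by revision. A minimal sketch with `huggingface_hub` (the `local_dir` name is an arbitrary choice):
```python
from huggingface_hub import snapshot_download

# Fetch the 5bpw / 6-bit lm_head quantization from its branch.
snapshot_download(
    repo_id="rAIfle/BagelMIsteryTour-v2-8x7B-exl2-rpcal",
    revision="5b6h",
    local_dir="BagelMIsteryTour-v2-8x7B-5b6h",
)
```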
Original model link: [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
Original model README below.
***
# BagelMIsteryTour-v2-8x7B
[GGUF versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-GGUF)
[AWQ versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-AWQ)
Bagel, Mixtral Instruct, with extra spices. Give it a taste. Works with Alpaca prompt formats, though the Mistral format should also work.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/lxNMzXo_dq_JCP9YyUyaw.jpeg)
I started experimenting around seeing if I could improve or fix some of Bagel's problems. Totally inspired by seeing how well Doctor-Shotgun's Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss worked (which is a LimaRP tune on top of base Mixtral, and then merged with Mixtral Instruct) - I decided to try some merges of Bagel with Mixtral Instruct as a result.
Somehow I ended up here, Bagel, Mixtral Instruct, a little bit of LimaRP, a little bit of Sao10K's Sensualize. So far in my testing it's working very well, and while it seems fairly unaligned on a lot of stuff, it's maybe a little too aligned on a few specific things (which I think comes from Sensualize) - so that's something to play with in the future, or maybe try to DPO out.
I've been running (temp last) minP 0.1, dynatemp 0.5-4, rep pen 1.07, rep range 1024. I've been testing Alpaca style Instruction/Response, and Instruction/Input/Response and those seem to work well, I expect Mistral's prompt format would also work well. You may need to add a stopping string on "{{char}}:" for RPs because it can sometimes duplicate those out in responses and waffle on. Seems to hold up and not fall apart at long contexts like Bagel and some other Mixtral tunes seem to, definitely doesn't seem prone to loopyness either. Can be pushed into extravagant prose if the scene/setting calls for it.
__Version 2:__ lowered the mix of Sensualize.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
- model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
parameters:
density: 0.5
weight: 0.2
- model: Sao10K/Sensualize-Mixtral-bf16
parameters:
density: 0.5
weight: 0.1
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
parameters:
density: 0.6
weight: 1.0
- model: jondurbin/bagel-dpo-8x7b-v0.2
parameters:
density: 0.6
weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
|
bazla24/pegasus-samsum | bazla24 | "2024-01-26T09:08:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T09:08:16Z" | Entry not found |
vaal/cols | vaal | "2024-01-26T14:18:59Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T09:09:08Z" | ---
license: apache-2.0
---
|
csukuangfj/icefall_librispeech_streaming_pruned_transducer_stateless4_20220625 | csukuangfj | "2024-01-26T09:16:39Z" | 0 | 0 | null | [
"tensorboard",
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T09:11:01Z" | ---
license: apache-2.0
---
This repo is forked from
https://huggingface.co/pkufool/icefall_librispeech_streaming_pruned_transducer_stateless4_20220625
See https://github.com/k2-fsa/icefall/pull/380 |
hasiburrahman/ppo-LunarLander-v2 | hasiburrahman | "2024-01-26T09:13:50Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T09:13:32Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.56 +/- 16.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
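A minimal loading sketch in place of the TODO, assuming the checkpoint follows the common `ppo-LunarLander-v2.zip` naming convention (the filename is not recorded in this card):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="hasiburrahman/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

mean_reward, std_reward = evaluate_policy(
    model, gym.make("LunarLander-v2"), n_eval_episodes=10
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```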
|
mehdismi/llama | mehdismi | "2024-01-26T09:19:39Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T09:19:39Z" | ---
license: apache-2.0
---
|
zhaoxinwind/bi_classifier | zhaoxinwind | "2024-01-26T09:22:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T09:22:42Z" | Entry not found |
robthewrecker/rvc-models | robthewrecker | "2024-02-24T05:59:52Z" | 0 | 1 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-26T09:24:02Z" | ---
license: openrail
---
|
Munditos3D/Primero | Munditos3D | "2024-01-26T09:26:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T09:24:02Z" | Entry not found |
zhaoxinwind/cust_classifier | zhaoxinwind | "2024-01-26T09:25:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T09:25:34Z" | Entry not found |
Suryaprakash63/AVENGERS.s | Suryaprakash63 | "2024-01-26T09:29:55Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T09:29:55Z" | ---
license: apache-2.0
---
|
csukuangfj/icefall_asr_wenetspeech_pruned_transducer_stateless5_streaming | csukuangfj | "2024-01-26T09:39:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T09:31:34Z" | This repo is forked from
https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless5_streaming
See https://github.com/k2-fsa/icefall/pull/447. |
NicoBarti/p2 | NicoBarti | "2024-01-26T09:54:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T09:54:07Z" | Entry not found |
Owentaku/distilbert-base-uncased-finetuned-imdb | Owentaku | "2024-01-29T12:36:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-01-26T09:56:28Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4965 |
| 2.5792 | 2.0 | 314 | 2.4280 |
| 2.5354 | 3.0 | 471 | 2.4508 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jendragora/pipka | jendragora | "2024-01-26T10:02:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:01:12Z" | tiny very fluffy rabbit eats carrots |
Federic/lora-fine-tuning-llama2-SQL-lora-1000-10-dataset-size-sqlcoder | Federic | "2024-01-26T10:01:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:01:25Z" | Entry not found |
hojzas/setfit-proj8-multilabel | hojzas | "2024-01-26T10:07:59Z" | 0 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:hojzas/proj8-multilabel",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"co2_eq_emissions",
"region:us"
] | text-classification | "2024-01-26T10:07:33Z" | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- hojzas/proj8-multilabel
metrics:
- accuracy
widget:
- text: 'def first_with_given_key(iterable, key=lambda x: x):\n keys_used = {}\n for
item in iterable:\n rp = repr(key(item))\n if rp not in keys_used.keys():\n keys_used[rp]
= repr(item)\n yield item'
- text: 'def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for
i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))'
- text: 'def first_with_given_key(iterable, key=repr):\n set_of_keys = set()\n lambda_key
= (lambda x: key(x))\n for item in iterable:\n key = lambda_key(item)\n try:\n key_for_set
= hash(key)\n except TypeError:\n key_for_set = repr(key)\n if
key_for_set in set_of_keys:\n continue\n set_of_keys.add(key_for_set)\n yield
item'
- text: 'def first_with_given_key(iterable, key = lambda x: x):\n found_keys={}\n for
i in iterable:\n if key(i) not in found_keys.keys():\n found_keys[key(i)]=i\n yield
i'
- text: 'def first_with_given_key(the_iterable, key=lambda x: x):\n temp_keys=[]\n for
i in range(len(the_iterable)):\n if (key(the_iterable[i]) not in temp_keys):\n temp_keys.append(key(the_iterable[i]))\n yield
the_iterable[i]\n del temp_keys'
pipeline_tag: text-classification
inference: false
co2_eq_emissions:
emissions: 0.2716104726718793
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
ram_total_size: 251.49160385131836
hours_used: 0.005
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [hojzas/proj8-multilabel](https://huggingface.co/datasets/hojzas/proj8-multilabel) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
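As a rough illustration of that two-step recipe, a minimal training sketch with setfit 1.x; the `text`/`label` column names are assumptions about the dataset layout:
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

dataset = load_dataset("hojzas/proj8-multilabel")

# One-vs-rest head, matching this card's classification head.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)
args = TrainingArguments(batch_size=16, num_epochs=1)  # values from this card
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],  # expects "text"/"label" columns (assumed)
)
trainer.train()
model.save_pretrained("setfit-proj8-multilabel")
```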
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
- **Training Dataset:** [hojzas/proj8-multilabel](https://huggingface.co/datasets/hojzas/proj8-multilabel)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/setfit-proj8-multilabel")
# Run inference
preds = model("def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 43 | 92.5185 | 125 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0147 | 1 | 0.3001 | - |
| 0.7353 | 50 | 0.0104 | - |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.000 kg of CO2
- **Hours Used**: 0.005 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: No GPU used
- **CPU Model**: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- **RAM Size**: 251.49 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
isaacekblad/dendrite | isaacekblad | "2024-01-26T10:07:41Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-01-26T10:07:39Z" | ---
license: creativeml-openrail-m
---
|
vierlinglukas/a2c-PandaReachDense-v3 | vierlinglukas | "2024-01-26T10:14:17Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T10:10:04Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
lilac1126/ebrug | lilac1126 | "2024-01-26T10:14:11Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-26T10:13:30Z" | ---
license: openrail
---
|
camenduru/drh1rlmaernlsoa8 | camenduru | "2024-01-26T10:19:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:16:51Z" | Entry not found |
VideoCrafter/VideoCrafter2 | VideoCrafter | "2024-01-26T10:40:21Z" | 0 | 33 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T10:20:37Z" | ---
license: apache-2.0
---
|
amirmatrix/shadmehr | amirmatrix | "2024-01-26T10:57:11Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-26T10:21:19Z" | ---
license: openrail
---
|
mik3lo/test | mik3lo | "2024-01-26T10:22:30Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T10:22:30Z" | ---
license: apache-2.0
---
|
ashishbaraiya/causal-model-fintuned | ashishbaraiya | "2024-01-26T10:22:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:22:38Z" | Entry not found |
ktanku/CHERRYBULLET | ktanku | "2024-01-26T10:37:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:35:57Z" | Entry not found |
arths/divine-v7 | arths | "2024-01-26T10:53:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:36:08Z" | Entry not found |
csukuangfj/icefall-asr-conv-emformer-transducer-stateless2-zh | csukuangfj | "2024-01-26T10:43:12Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T10:36:18Z" | ---
license: apache-2.0
---
This repo is forked from
https://huggingface.co/PingfengLuo/icefall-asr-conv-emformer-transducer-stateless2-zh
## Chinese-English-mixed ASR model using icefall_conv_emformer2
### Wenetspeech testset results
| TEST_NET | TEST_MEETING |
|----------|--------------|
| 9.64 | 9.2 |
as logged in `decoding_results/modified_beam_search_result`
### Training command
```bash
python3 conv_emformer_transducer_stateless2/train.py --world-size 8 --num-epochs 30 --start-epoch 1 --exp-dir conv_emformer_transducer_stateless2/exp --max-duration 400 --master-port 12321 --num-encoder-layers 12 --chunk-length 32 --cnn-module-kernel 31 --left-context-length 32 --right-context-length 8 --memory-size 32
```
### Model units are char+BPE, as listed in `data/lang_char_bpe/tokens.txt`
|
CocoReLiso/Secuencia_1 | CocoReLiso | "2024-01-26T10:37:33Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-01-26T10:37:32Z" | ---
license: unknown
---
|
Federic/lora-fine-tuning-llama2-SQL-lora-codellama | Federic | "2024-01-26T12:01:27Z" | 0 | 0 | null | [
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:finetune:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-01-26T10:38:37Z" | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: lora-fine-tuning-llama2-SQL-lora-codellama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-fine-tuning-llama2-SQL-lora-codellama
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7803 | 0.06 | 5 | 2.5059 |
| 1.2647 | 0.12 | 10 | 1.1731 |
| 0.8026 | 0.18 | 15 | 0.8834 |
| 0.6482 | 0.24 | 20 | 0.8281 |
| 0.8146 | 0.3 | 25 | 0.7858 |
| 0.7458 | 0.36 | 30 | 0.7275 |
| 0.5377 | 0.42 | 35 | 0.6520 |
| 0.5659 | 0.48 | 40 | 0.6560 |
| 0.6104 | 0.54 | 45 | 0.6101 |
| 0.6253 | 0.6 | 50 | 0.6024 |
| 0.4878 | 0.66 | 55 | 0.5891 |
| 0.4777 | 0.72 | 60 | 0.5830 |
| 0.634 | 0.78 | 65 | 0.5831 |
| 0.5562 | 0.84 | 70 | 0.5771 |
| 0.4696 | 0.9 | 75 | 0.5734 |
| 0.4193 | 0.96 | 80 | 0.5720 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
fxu/fxu | fxu | "2024-01-26T10:43:06Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-01-26T10:40:06Z" | ---
license: unknown
---
|
HinaBl/Yoclesh | HinaBl | "2024-01-26T10:41:49Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-26T10:40:39Z" | ---
license: openrail
---
|
ktanku/SHINEE | ktanku | "2024-01-26T10:49:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:44:38Z" | Entry not found |
max-at-Parami/whisper-small-zh-hk-3 | max-at-Parami | "2024-01-26T10:45:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:45:44Z" | Entry not found |
Augustya07/Llama-2-7b-hf-neitzsche-books-adapters | Augustya07 | "2024-01-26T10:46:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-26T10:46:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Augustya07/Llama-2-7b-hf-neitzsche-books | Augustya07 | "2024-01-26T10:47:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-26T10:47:17Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kamatchipandian/Pandian | kamatchipandian | "2024-01-26T10:48:59Z" | 0 | 1 | null | [
"region:us"
] | null | "2024-01-26T10:48:59Z" | Entry not found |
CyberHarem/kristen_arknights | CyberHarem | "2024-03-22T22:02:02Z" | 0 | 0 | null | [
"art",
"not-for-all-audiences",
"text-to-image",
"dataset:CyberHarem/kristen_arknights",
"license:mit",
"region:us"
] | text-to-image | "2024-01-26T10:49:39Z" | ---
license: mit
datasets:
- CyberHarem/kristen_arknights
pipeline_tag: text-to-image
tags:
- art
- not-for-all-audiences
---
# LoRA model of Kristen Wright (Arknights)
## What Is This?
This is the LoRA model of waifu Kristen Wright (Arknights).
## How Is It Trained?
* This model is trained with [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts), and the test images are generated with [a1111's webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) and [API sdk](https://github.com/mix1009/sdwebuiapi).
* The [auto-training framework](https://github.com/deepghs/cyberharem) is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The architecture of the base model is `SD1.5`.
* Dataset used for training is the `stage3-p480-1200` in [CyberHarem/kristen_arknights](https://huggingface.co/datasets/CyberHarem/kristen_arknights), which contains 197 images.
* **Trigger word is `kristen_arknights`.**
* Pruned core tags for this waifu are `long hair, animal ears, blonde hair, dog ears, dog girl, hairband, blue eyes, black hairband, floppy ears, breasts, very long hair`. You can add them to the prompt when some features of the waifu (e.g. hair color) are not stable.
* For more details in training, you can take a look at [training configuration file](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/train.toml).
* For more details in LoRA, you can download it, and read the metadata with a1111's webui.
## How to Use It?
After downloading the safetensors file for the specified step, you can use it like any common LoRA.
* Recommended LoRA weight is 0.5-0.85.
* Recommended trigger word weight is 0.7-1.1.
For example, if you want to use the model from step 531, you need to download [`531/kristen_arknights.safetensors`](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/531/kristen_arknights.safetensors) as LoRA. By using this model, you can generate images for the desired characters.
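In `diffusers` terms, that amounts to loading the file as LoRA weights on an SD1.5 checkpoint. A minimal sketch, assuming a generic SD1.5 base rather than the meinamix_v11 checkpoint used for the previews, with the step-531 file downloaded to the working directory:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # generic SD1.5 base (assumption)
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights(".", weight_name="kristen_arknights.safetensors")

image = pipe(
    "kristen_arknights, 1girl, long hair, blonde hair, hairband",  # trigger word + core tags
    cross_attention_kwargs={"scale": 0.75},  # recommended LoRA weight 0.5-0.85
).images[0]
image.save("preview.png")
```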
## Which Step Should I Use?
We selected 5 good steps for you to choose from. The best one is step 531.
760 images (783.07 MiB) were generated for auto-testing.
![Metrics Plot](metrics_plot.png)
The base model used for generating preview images is [meinamix_v11](https://huggingface.co/meinamix_v11).
Here are the preview of the recommended steps:
| Step | Epoch | CCIP | AI Corrupt | Bikini Plus | Score | Download | pattern_0_0 | pattern_0_1 | pattern_1 | portrait_0 | portrait_1 | portrait_2 | full_body_0 | full_body_1 | profile_0 | profile_1 | free_0 | free_1 | shorts | maid_0 | maid_1 | miko | yukata | suit | china | bikini_0 | bikini_1 | bikini_2 | sit | squat | kneel | jump | crossed_arms | angry | smile | cry | grin | n_lie_0 | n_lie_1 | n_stand_0 | n_stand_1 | n_stand_2 | n_sex_0 | n_sex_1 |
|-------:|--------:|:----------|:-------------|:--------------|:----------|:--------------------------------------------------------------------------------------------------------|:----------------------------------------------|:----------------------------------------------|:------------------------------------------|:--------------------------------------------|:--------------------------------------------|:--------------------------------------------|:----------------------------------------------|:----------------------------------------------|:------------------------------------------|:------------------------------------------|:------------------------------------|:------------------------------------|:------------------------------------|:------------------------------------|:------------------------------------|:--------------------------------|:------------------------------------|:--------------------------------|:----------------------------------|:----------------------------------------|:----------------------------------------|:----------------------------------------|:------------------------------|:----------------------------------|:----------------------------------|:--------------------------------|:------------------------------------------------|:----------------------------------|:----------------------------------|:------------------------------|:--------------------------------|:--------------------------------------|:--------------------------------------|:------------------------------------------|:------------------------------------------|:------------------------------------------|:--------------------------------------|:--------------------------------------|
| 531 | 9 | **0.931** | 0.974 | 0.848 | **0.762** | [Download](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/531/kristen_arknights.zip) | ![pattern_0_0](531/previews/pattern_0_0.png) | ![pattern_0_1](531/previews/pattern_0_1.png) | ![pattern_1](531/previews/pattern_1.png) | ![portrait_0](531/previews/portrait_0.png) | ![portrait_1](531/previews/portrait_1.png) | ![portrait_2](531/previews/portrait_2.png) | ![full_body_0](531/previews/full_body_0.png) | ![full_body_1](531/previews/full_body_1.png) | ![profile_0](531/previews/profile_0.png) | ![profile_1](531/previews/profile_1.png) | ![free_0](531/previews/free_0.png) | ![free_1](531/previews/free_1.png) | ![shorts](531/previews/shorts.png) | ![maid_0](531/previews/maid_0.png) | ![maid_1](531/previews/maid_1.png) | ![miko](531/previews/miko.png) | ![yukata](531/previews/yukata.png) | ![suit](531/previews/suit.png) | ![china](531/previews/china.png) | ![bikini_0](531/previews/bikini_0.png) | ![bikini_1](531/previews/bikini_1.png) | ![bikini_2](531/previews/bikini_2.png) | ![sit](531/previews/sit.png) | ![squat](531/previews/squat.png) | ![kneel](531/previews/kneel.png) | ![jump](531/previews/jump.png) | ![crossed_arms](531/previews/crossed_arms.png) | ![angry](531/previews/angry.png) | ![smile](531/previews/smile.png) | ![cry](531/previews/cry.png) | ![grin](531/previews/grin.png) | ![n_lie_0](531/previews/n_lie_0.png) | ![n_lie_1](531/previews/n_lie_1.png) | ![n_stand_0](531/previews/n_stand_0.png) | ![n_stand_1](531/previews/n_stand_1.png) | ![n_stand_2](531/previews/n_stand_2.png) | ![n_sex_0](531/previews/n_sex_0.png) | ![n_sex_1](531/previews/n_sex_1.png) |
| 708 | 12 | 0.925 | 0.982 | 0.844 | 0.732 | [Download](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/708/kristen_arknights.zip) | ![pattern_0_0](708/previews/pattern_0_0.png) | ![pattern_0_1](708/previews/pattern_0_1.png) | ![pattern_1](708/previews/pattern_1.png) | ![portrait_0](708/previews/portrait_0.png) | ![portrait_1](708/previews/portrait_1.png) | ![portrait_2](708/previews/portrait_2.png) | ![full_body_0](708/previews/full_body_0.png) | ![full_body_1](708/previews/full_body_1.png) | ![profile_0](708/previews/profile_0.png) | ![profile_1](708/previews/profile_1.png) | ![free_0](708/previews/free_0.png) | ![free_1](708/previews/free_1.png) | ![shorts](708/previews/shorts.png) | ![maid_0](708/previews/maid_0.png) | ![maid_1](708/previews/maid_1.png) | ![miko](708/previews/miko.png) | ![yukata](708/previews/yukata.png) | ![suit](708/previews/suit.png) | ![china](708/previews/china.png) | ![bikini_0](708/previews/bikini_0.png) | ![bikini_1](708/previews/bikini_1.png) | ![bikini_2](708/previews/bikini_2.png) | ![sit](708/previews/sit.png) | ![squat](708/previews/squat.png) | ![kneel](708/previews/kneel.png) | ![jump](708/previews/jump.png) | ![crossed_arms](708/previews/crossed_arms.png) | ![angry](708/previews/angry.png) | ![smile](708/previews/smile.png) | ![cry](708/previews/cry.png) | ![grin](708/previews/grin.png) | ![n_lie_0](708/previews/n_lie_0.png) | ![n_lie_1](708/previews/n_lie_1.png) | ![n_stand_0](708/previews/n_stand_0.png) | ![n_stand_1](708/previews/n_stand_1.png) | ![n_stand_2](708/previews/n_stand_2.png) | ![n_sex_0](708/previews/n_sex_0.png) | ![n_sex_1](708/previews/n_sex_1.png) |
| 885 | 15 | 0.921 | 0.981 | 0.844 | 0.717 | [Download](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/885/kristen_arknights.zip) | ![pattern_0_0](885/previews/pattern_0_0.png) | ![pattern_0_1](885/previews/pattern_0_1.png) | ![pattern_1](885/previews/pattern_1.png) | ![portrait_0](885/previews/portrait_0.png) | ![portrait_1](885/previews/portrait_1.png) | ![portrait_2](885/previews/portrait_2.png) | ![full_body_0](885/previews/full_body_0.png) | ![full_body_1](885/previews/full_body_1.png) | ![profile_0](885/previews/profile_0.png) | ![profile_1](885/previews/profile_1.png) | ![free_0](885/previews/free_0.png) | ![free_1](885/previews/free_1.png) | ![shorts](885/previews/shorts.png) | ![maid_0](885/previews/maid_0.png) | ![maid_1](885/previews/maid_1.png) | ![miko](885/previews/miko.png) | ![yukata](885/previews/yukata.png) | ![suit](885/previews/suit.png) | ![china](885/previews/china.png) | ![bikini_0](885/previews/bikini_0.png) | ![bikini_1](885/previews/bikini_1.png) | ![bikini_2](885/previews/bikini_2.png) | ![sit](885/previews/sit.png) | ![squat](885/previews/squat.png) | ![kneel](885/previews/kneel.png) | ![jump](885/previews/jump.png) | ![crossed_arms](885/previews/crossed_arms.png) | ![angry](885/previews/angry.png) | ![smile](885/previews/smile.png) | ![cry](885/previews/cry.png) | ![grin](885/previews/grin.png) | ![n_lie_0](885/previews/n_lie_0.png) | ![n_lie_1](885/previews/n_lie_1.png) | ![n_stand_0](885/previews/n_stand_0.png) | ![n_stand_1](885/previews/n_stand_1.png) | ![n_stand_2](885/previews/n_stand_2.png) | ![n_sex_0](885/previews/n_sex_0.png) | ![n_sex_1](885/previews/n_sex_1.png) |
| 2478 | 42 | 0.915 | 0.985 | 0.847 | 0.696 | [Download](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/2478/kristen_arknights.zip) | ![pattern_0_0](2478/previews/pattern_0_0.png) | ![pattern_0_1](2478/previews/pattern_0_1.png) | ![pattern_1](2478/previews/pattern_1.png) | ![portrait_0](2478/previews/portrait_0.png) | ![portrait_1](2478/previews/portrait_1.png) | ![portrait_2](2478/previews/portrait_2.png) | ![full_body_0](2478/previews/full_body_0.png) | ![full_body_1](2478/previews/full_body_1.png) | ![profile_0](2478/previews/profile_0.png) | ![profile_1](2478/previews/profile_1.png) | ![free_0](2478/previews/free_0.png) | ![free_1](2478/previews/free_1.png) | ![shorts](2478/previews/shorts.png) | ![maid_0](2478/previews/maid_0.png) | ![maid_1](2478/previews/maid_1.png) | ![miko](2478/previews/miko.png) | ![yukata](2478/previews/yukata.png) | ![suit](2478/previews/suit.png) | ![china](2478/previews/china.png) | ![bikini_0](2478/previews/bikini_0.png) | ![bikini_1](2478/previews/bikini_1.png) | ![bikini_2](2478/previews/bikini_2.png) | ![sit](2478/previews/sit.png) | ![squat](2478/previews/squat.png) | ![kneel](2478/previews/kneel.png) | ![jump](2478/previews/jump.png) | ![crossed_arms](2478/previews/crossed_arms.png) | ![angry](2478/previews/angry.png) | ![smile](2478/previews/smile.png) | ![cry](2478/previews/cry.png) | ![grin](2478/previews/grin.png) | ![n_lie_0](2478/previews/n_lie_0.png) | ![n_lie_1](2478/previews/n_lie_1.png) | ![n_stand_0](2478/previews/n_stand_0.png) | ![n_stand_1](2478/previews/n_stand_1.png) | ![n_stand_2](2478/previews/n_stand_2.png) | ![n_sex_0](2478/previews/n_sex_0.png) | ![n_sex_1](2478/previews/n_sex_1.png) |
| 1947 | 33 | 0.912 | **0.988** | **0.851** | 0.691 | [Download](https://huggingface.co/CyberHarem/kristen_arknights/resolve/main/1947/kristen_arknights.zip) | ![pattern_0_0](1947/previews/pattern_0_0.png) | ![pattern_0_1](1947/previews/pattern_0_1.png) | ![pattern_1](1947/previews/pattern_1.png) | ![portrait_0](1947/previews/portrait_0.png) | ![portrait_1](1947/previews/portrait_1.png) | ![portrait_2](1947/previews/portrait_2.png) | ![full_body_0](1947/previews/full_body_0.png) | ![full_body_1](1947/previews/full_body_1.png) | ![profile_0](1947/previews/profile_0.png) | ![profile_1](1947/previews/profile_1.png) | ![free_0](1947/previews/free_0.png) | ![free_1](1947/previews/free_1.png) | ![shorts](1947/previews/shorts.png) | ![maid_0](1947/previews/maid_0.png) | ![maid_1](1947/previews/maid_1.png) | ![miko](1947/previews/miko.png) | ![yukata](1947/previews/yukata.png) | ![suit](1947/previews/suit.png) | ![china](1947/previews/china.png) | ![bikini_0](1947/previews/bikini_0.png) | ![bikini_1](1947/previews/bikini_1.png) | ![bikini_2](1947/previews/bikini_2.png) | ![sit](1947/previews/sit.png) | ![squat](1947/previews/squat.png) | ![kneel](1947/previews/kneel.png) | ![jump](1947/previews/jump.png) | ![crossed_arms](1947/previews/crossed_arms.png) | ![angry](1947/previews/angry.png) | ![smile](1947/previews/smile.png) | ![cry](1947/previews/cry.png) | ![grin](1947/previews/grin.png) | ![n_lie_0](1947/previews/n_lie_0.png) | ![n_lie_1](1947/previews/n_lie_1.png) | ![n_stand_0](1947/previews/n_stand_0.png) | ![n_stand_1](1947/previews/n_stand_1.png) | ![n_stand_2](1947/previews/n_stand_2.png) | ![n_sex_0](1947/previews/n_sex_0.png) | ![n_sex_1](1947/previews/n_sex_1.png) |
## Anything Else?
Because the automation of LoRA training always annoys some people, we do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
## All Steps
We uploaded the files for all steps. You can check the images and metrics, and download them, via the following links:
* [Steps From 1947 to 3540](all/0.md)
* [Steps From 177 to 1770](all/1.md)
|
xyfJASON/Context-Encoder-pytorch | xyfJASON | "2024-01-26T11:00:29Z" | 0 | 0 | null | [
"tensorboard",
"license:mit",
"region:us"
] | null | "2024-01-26T10:50:41Z" | ---
license: mit
---
Checkpoints and training logs for GitHub repository: [xyfJASON/Context-Encoder-pytorch](https://github.com/xyfJASON/Context-Encoder-pytorch).
|
derlockia/mistral | derlockia | "2024-01-26T10:50:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:50:57Z" | Entry not found |
MonsterMMORPG/bigkz | MonsterMMORPG | "2024-02-08T18:58:49Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-01-26T10:51:36Z" | Entry not found |
joiortega1/prueba_llm | joiortega1 | "2024-01-29T09:43:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-26T10:52:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dadsdaff/eyee | dadsdaff | "2024-01-26T10:56:13Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-26T10:55:42Z" | ---
license: openrail
---
|
sergeipetrov/swin2SR-classical-sr-x2-64-IE | sergeipetrov | "2024-02-01T09:51:05Z" | 0 | 0 | null | [
"endpoints_compatible",
"region:us"
] | null | "2024-01-26T10:56:24Z" | Entry not found |
Geeer/ffy | Geeer | "2024-01-26T11:07:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T10:58:11Z" | Entry not found |
xyfJASON/PartialConv-Inpainting-pytorch | xyfJASON | "2024-01-26T11:16:19Z" | 0 | 0 | null | [
"tensorboard",
"license:mit",
"region:us"
] | null | "2024-01-26T11:02:46Z" | ---
license: mit
---
Checkpoints and training logs for GitHub repository: [xyfJASON/PartialConv-Inpainting-pytorch](https://github.com/xyfJASON/PartialConv-Inpainting-pytorch/).
Note: The PSNR values recorded in the log and tensorboard are incorrect; the final test results should be taken as authoritative.
|
grs2001/finetuned_senBERT_train | grs2001 | "2024-01-26T11:05:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T11:05:45Z" | Entry not found |
AllIn90/testTest | AllIn90 | "2024-01-26T11:08:14Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T11:08:14Z" | ---
license: apache-2.0
---
|
adfafdaf/JulioIzq | adfafdaf | "2024-03-25T11:09:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T11:12:42Z" | Entry not found |
J0N45/3D-Models | J0N45 | "2024-01-26T11:27:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T11:21:26Z" | Entry not found |
SilverCoder66/Mistral-7B-Instruct-adapt-v0.2 | SilverCoder66 | "2024-01-26T11:25:20Z" | 0 | 1 | null | [
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-01-26T11:22:46Z" | ---
license: cc-by-nc-4.0
---
Description TBD, thanks for checking in!
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# This repository's id on the Hugging Face Hub.
repo_id = "SilverCoder66/Mistral-7B-Instruct-adapt-v0.2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
xyfJASON/VCNet-pytorch | xyfJASON | "2024-01-26T12:17:50Z" | 0 | 0 | null | [
"tensorboard",
"license:mit",
"region:us"
] | null | "2024-01-26T11:22:52Z" | ---
license: mit
---
Checkpoints and training logs for GitHub repository: [xyfJASON/VCNet-pytorch](https://github.com/xyfJASON/VCNet-pytorch).
|
SilverCoder66/Mistral-7B-Instruct-adapt-v0.21 | SilverCoder66 | "2024-01-26T11:27:54Z" | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-01-26T11:26:51Z" | ---
license: cc-by-nc-4.0
---
Description TBD, thanks for checking in!
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# This repository's id on the Hugging Face Hub.
repo_id = "SilverCoder66/Mistral-7B-Instruct-adapt-v0.21"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
reecursion/xlm-roberta-base-inspiration | reecursion | "2024-01-29T04:32:58Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-26T11:28:18Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-inspiration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-inspiration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6599
- Accuracy: 0.8697
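For a quick try, a minimal (hypothetical) usage sketch with the 🤗 `pipeline` API follows; the returned label names depend on the fine-tuning configuration and may be generic:
```python
from transformers import pipeline

# Load this checkpoint as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="reecursion/xlm-roberta-base-inspiration",
)

# Label ids/names come from the training config and may be LABEL_0 / LABEL_1
# unless id2label was set during fine-tuning.
print(classifier("Her story showed me that anyone can start over."))
```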
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
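For reference, a rough reconstruction of these settings as `TrainingArguments`; the `output_dir` is an illustrative assumption, and the Adam betas/epsilon listed above match the library defaults:
```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; other fields use defaults.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-inspiration",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```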
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6109 | 1.0 | 1237 | 0.3765 | 0.8406 |
| 0.3053 | 2.0 | 2474 | 0.3668 | 0.8503 |
| 0.2276 | 3.0 | 3711 | 0.5105 | 0.8673 |
| 0.2864 | 4.0 | 4948 | 0.5916 | 0.8608 |
| 0.0791 | 5.0 | 6185 | 0.6599 | 0.8697 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
SilverCoder66/Mistral-7B-Instruct-adapt-v0.22 | SilverCoder66 | "2024-01-26T11:29:38Z" | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-01-26T11:28:28Z" | ---
license: cc-by-nc-4.0
---
Description TBD, thanks for checking in!
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# This repository's id on the Hugging Face Hub.
repo_id = "SilverCoder66/Mistral-7B-Instruct-adapt-v0.22"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
itamarcard/zip | itamarcard | "2024-01-26T11:31:08Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-01-26T11:29:03Z" | ---
license: openrail
---
|
SilverCoder66/Mistral-7B-Instruct-adapt-v0.23 | SilverCoder66 | "2024-01-26T11:31:17Z" | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-01-26T11:30:20Z" | ---
license: cc-by-nc-4.0
---
Description TBD, thanks for checking in!
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# This repository's id on the Hugging Face Hub.
repo_id = "SilverCoder66/Mistral-7B-Instruct-adapt-v0.23"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Mixard/Files | Mixard | "2024-04-21T10:03:08Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T11:31:02Z" | ---
license: apache-2.0
---
|
AKT47/Chromatic_Taleweaver | AKT47 | "2024-01-26T11:33:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T11:32:10Z" | Entry not found |
Yevhenii1234/tyuio | Yevhenii1234 | "2024-01-26T11:35:52Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T11:35:52Z" | ---
license: apache-2.0
---
|
AKT47/songshu | AKT47 | "2024-01-26T11:40:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T11:39:06Z" | Entry not found |