
8bpw 8h (8 bits per weight, 8-bit output head)

Frostwind-v1


A finetune of upstage/SOLAR-10.7B-v1.0. Training took roughly 3 hours on 4x RTX 4090s, over 2 epochs, with around 52K varied samples.

Dataset Composition:
20% - Coding
30% - Instruct
30% - Generalised Data
10% - Roleplay
10% - Dealignment
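
(At around 52K samples, that works out to roughly 10.4K coding, 15.6K instruct, 15.6K generalised, 5.2K roleplay, and 5.2K dealignment examples.)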


Testing Notes:

Fairly smart, as I expected. Obviously not at the level of the bigger models, but I did not expect that level from a model this size anyway.

Could be sampler issues, but I generally needed one or two swipes to get the correct answer in zero-context tests. With context filled, no issues on my end.

For roleplays: adding instructions like "avoid writing as {{user}}" surprisingly helps, plus a proper prompt of course. I liked the writing style. It handled group characters in one card well during my tests.

Fairly uncensored during roleplay. Yeah, the "as an AI" stuff can happen at zero context, but I have no issues once a character card is introduced. I had no trouble making outputs that would get me 2,500 life sentences if posted here.


Trained with the Alpaca format:

```
### Instruction:
<Prompt>

### Response:
```

OR

```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
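
A minimal Python sketch of assembling these prompts, in case it helps; the helper name and example strings are illustrative and not part of the model:

```python
# Minimal sketch of the two Alpaca prompt layouts described above.
# build_alpaca_prompt is an illustrative helper, not shipped with the model.

def build_alpaca_prompt(instruction, context=None):
    """Return a prompt string in the Alpaca format this model was trained on."""
    if context:
        # Instruction + input variant.
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            f"### Response:\n"
        )
    # Instruction-only variant.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_alpaca_prompt("Summarize the passage.", "Some context here."))
```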


wandb log:

```
wandb: Run history:
wandb: eval/loss β–ˆβ–ƒβ–‚β–‚β–‚β–‚β–‚β–β–β–β–β–‚β–‚β–‚β–‚β–‚β–‚β–β–β–
wandb: eval/runtime β–ƒβ–‚β–ƒβ–‚β–ƒβ–‚β–‚β–ƒβ–β–ƒβ–ˆβ–‚β–ƒβ–ƒβ–ƒβ–‚β–ƒβ–ƒβ–‚β–‚
wandb: eval/samples_per_second β–†β–‡β–†β–‡β–†β–‡β–‡β–†β–ˆβ–†β–β–‡β–†β–†β–†β–‡β–†β–†β–‡β–‡
wandb: eval/steps_per_second β–†β–‡β–†β–‡β–†β–‡β–‡β–†β–ˆβ–†β–β–‡β–†β–†β–†β–‡β–†β–†β–‡β–‡
wandb: train/epoch β–β–β–β–‚β–‚β–‚β–‚β–‚β–‚β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–„β–„β–„β–„β–„β–„β–…β–…β–…β–…β–…β–…β–†β–†β–†β–†β–†β–‡β–‡β–‡β–‡β–‡β–‡β–ˆβ–ˆβ–ˆ
wandb: train/global_step β–β–β–β–‚β–‚β–‚β–‚β–‚β–‚β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–„β–„β–„β–„β–„β–„β–…β–…β–…β–…β–…β–…β–†β–†β–†β–†β–†β–‡β–‡β–‡β–‡β–‡β–‡β–ˆβ–ˆβ–ˆ
wandb: train/learning_rate β–„β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‡β–‡β–‡β–‡β–‡β–†β–†β–†β–†β–…β–…β–…β–…β–„β–„β–„β–ƒβ–ƒβ–ƒβ–ƒβ–‚β–‚β–‚β–‚β–‚β–β–β–β–β–β–β–
wandb: train/loss β–ˆβ–…β–…β–†β–…β–…β–„β–„β–„β–†β–†β–…β–†β–†β–†β–…β–„β–†β–…β–…β–…β–†β–„β–„β–ƒβ–„β–ƒβ–ƒβ–‚β–ƒβ–„β–‚β–‚β–ƒβ–ƒβ–‚β–β–‚β–‚β–‚
wandb:
wandb: Run summary:
wandb: eval/loss 0.74622
wandb: eval/runtime 72.5049
wandb: eval/samples_per_second 37.239
wandb: eval/steps_per_second 2.331
wandb: train/epoch 1.98
wandb: train/global_step 410
wandb: train/learning_rate 0.0
wandb: train/loss 0.6457
wandb: train/total_flos 3.4382652340646707e+18
wandb: train/train_loss 0.70204
wandb: train/train_runtime 10880.917
wandb: train/train_samples_per_second 9.417
wandb: train/train_steps_per_second 0.038
```
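
(The train_runtime of 10,880.9 seconds is just over 3 hours, matching the training time above.)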
