---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
An attempt using BlockMerge_Gradient on Pygmalion-2 to get better results. In addition, LimaRP v3 was used; it is recommended to read its documentation.
## Description
This repo contains fp16 files of Emerald-13B.
## Models and LoRAs used
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
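BlockMerge_Gradient blends the weights of two models layer by layer, with the mix ratio following a gradient across the layer stack rather than a single fixed value. A minimal sketch of that idea (plain Python with toy per-layer weight lists; `gradient_merge` is a hypothetical helper, not the actual tool, which operates on Llama checkpoints):

```python
def gradient_merge(weights_a, weights_b, start_ratio=0.0, end_ratio=1.0):
    """Blend per-layer weights of two models, interpolating the mix
    ratio linearly from start_ratio to end_ratio across the layers."""
    n = len(weights_a)
    merged = []
    for i, (wa, wb) in enumerate(zip(weights_a, weights_b)):
        # t = fraction of model B used at this layer depth
        t = start_ratio + (end_ratio - start_ratio) * (i / (n - 1) if n > 1 else 0.0)
        merged.append([(1 - t) * a + t * b for a, b in zip(wa, wb)])
    return merged

# Two toy "models", each with 3 layers of 2 scalar weights
model_a = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
model_b = [[3.0, 3.0], [3.0, 3.0], [3.0, 3.0]]
print(gradient_merge(model_a, model_b))
# → [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
```

Early layers stay close to model A and late layers close to model B; the real merge tool exposes finer per-block control over these ratios.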
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
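For use outside SillyTavern, the template above can be filled in with ordinary string formatting. A small sketch (`build_prompt` is a hypothetical helper name, not part of this repo):

```python
# Alpaca-style template as used by this model
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Substitute the user's instruction for the {prompt} placeholder
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a haiku about autumn."))
```

The model's reply is then generated as a continuation after the `### Response:` line.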
## LimaRP v3 usage and suggested settings
You can follow these instruction format settings in SillyTavern. Replace `tiny` with your desired response length:
Special thanks to Sushi.
If you want to support me, you can here.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 51.39 |
| ARC (25-shot) | 62.29 |
| HellaSwag (10-shot) | 83.69 |
| MMLU (5-shot) | 55.7 |
| TruthfulQA (0-shot) | 50.94 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 12.81 |
| DROP (3-shot) | 18.38 |