---
library_name: transformers
license: llama3.1
datasets:
- QuasarResearch/apollo-preview-v0.4
language:
- en
pipeline_tag: text-generation
---
# Model Card for Locutusque/Apollo-0.4-Llama-3.1-8B

## Model Details

### Model Description
A fine-tune of Llama-3.1-8B on QuasarResearch/apollo-preview-v0.4. The result is a capable roleplaying language model that retains the base model's general reasoning ability.
- **Developed by:** Locutusque
- **Model type:** Llama 3.1
- **Language(s) (NLP):** English
- **License:** Llama 3.1 Community License Agreement
## Direct Use
Roleplay and ERP, instruction following, general conversation, and similar chat-oriented tasks.
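A minimal generation sketch using the standard `transformers` chat-template API. The system prompt, sampling settings, and helper names here are illustrative assumptions, not recommendations from the model author:

```python
MODEL_ID = "Locutusque/Apollo-0.4-Llama-3.1-8B"

def build_chat(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble a chat in the message format consumed by Llama 3.1 chat templates."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

def generate(user_message: str,
             system_prompt: str = "You are a helpful assistant.",
             max_new_tokens: int = 256) -> str:
    # Imports deferred so the helper above stays importable without torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_chat(system_prompt, user_message),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Downloads ~16 GB of weights on first call:
# print(generate("Stay in character as a grizzled tavern keeper and greet me."))
```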
## Bias, Risks, and Limitations
This model is completely uncensored; use it at your own risk.
### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
## Training Details

### Training Data

[QuasarResearch/apollo-preview-v0.4](https://huggingface.co/datasets/QuasarResearch/apollo-preview-v0.4)

### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision
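"bf16 non-mixed precision" means parameters and gradients live in bfloat16 throughout training, rather than fp32 master weights with bf16 autocast as in mixed-precision (AMP) setups. A toy PyTorch illustration of that dtype behavior (not the actual training script):

```python
import torch

# In pure (non-mixed) bf16 training, parameters and their gradients are both
# bfloat16; mixed precision would instead keep an fp32 master copy of weights.
w = torch.randn(4, 4, dtype=torch.bfloat16, requires_grad=True)
x = torch.randn(2, 4, dtype=torch.bfloat16)

loss = (x @ w).pow(2).mean()  # forward pass entirely in bf16
loss.backward()

print(w.grad.dtype)  # torch.bfloat16 — gradients match the parameter dtype
```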
## Evaluation
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---:|---|---:|
| agieval_nous | 0.0 | none | | acc_norm | ↑ | 0.3833 | ± | 0.0094 |
| - agieval_aqua_rat | 1.0 | none | 0 | acc | ↑ | 0.2598 | ± | 0.0276 |
| | | none | 0 | acc_norm | ↑ | 0.2323 | ± | 0.0265 |
| - agieval_logiqa_en | 1.0 | none | 0 | acc | ↑ | 0.3825 | ± | 0.0191 |
| | | none | 0 | acc_norm | ↑ | 0.3856 | ± | 0.0191 |
| - agieval_lsat_ar | 1.0 | none | 0 | acc | ↑ | 0.1913 | ± | 0.0260 |
| | | none | 0 | acc_norm | ↑ | 0.1826 | ± | 0.0255 |
| - agieval_lsat_lr | 1.0 | none | 0 | acc | ↑ | 0.4216 | ± | 0.0219 |
| | | none | 0 | acc_norm | ↑ | 0.4216 | ± | 0.0219 |
| - agieval_lsat_rc | 1.0 | none | 0 | acc | ↑ | 0.5613 | ± | 0.0303 |
| | | none | 0 | acc_norm | ↑ | 0.5056 | ± | 0.0305 |
| - agieval_sat_en | 1.0 | none | 0 | acc | ↑ | 0.6553 | ± | 0.0332 |
| | | none | 0 | acc_norm | ↑ | 0.6068 | ± | 0.0341 |
| - agieval_sat_en_without_passage | 1.0 | none | 0 | acc | ↑ | 0.3835 | ± | 0.0340 |
| | | none | 0 | acc_norm | ↑ | 0.3738 | ± | 0.0338 |
| - agieval_sat_math | 1.0 | none | 0 | acc | ↑ | 0.4136 | ± | 0.0333 |
| | | none | 0 | acc_norm | ↑ | 0.3227 | ± | 0.0316 |
| arc_challenge | 1.0 | none | 0 | acc | ↑ | 0.5179 | ± | 0.0146 |
| | | none | 0 | acc_norm | ↑ | 0.5580 | ± | 0.0145 |
| eq_bench | 2.1 | none | 0 | eqbench | ↑ | 63.1277 | ± | 2.3905 |
| | | none | 0 | percent_parseable | ↑ | 100.0000 | ± | 0.0000 |
| truthfulqa_mc2 | 2.0 | none | 0 | acc | ↑ | 0.5310 | ± | 0.0150 |
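These results follow the output format of EleutherAI's lm-evaluation-harness, so a command along these lines should reproduce them (task names are taken from the table above; the dtype and batch-size flags are assumptions, not the author's stated settings):

```shell
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=Locutusque/Apollo-0.4-Llama-3.1-8B,dtype=bfloat16 \
  --tasks agieval_nous,arc_challenge,eq_bench,truthfulqa_mc2 \
  --batch_size auto
```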