Model Card
Model Description
This is a large language model (LLM) fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the DIBT/10k_prompts_ranked dataset.
Evaluation Results
HellaSwag
Evaluation configuration: hf (pretrained=EleutherAI/pythia-160m, revision=step100000, dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto:4 (largest batch size detected: 64).
| Tasks     | Version | Filter | n-shot | Metric   |   | Value  |   | Stderr |
|-----------|--------:|--------|-------:|----------|---|-------:|---|-------:|
| hellaswag |       1 | none   |      0 | acc      | ↑ | 0.2872 | ± | 0.0045 |
| hellaswag |       1 | none   |      0 | acc_norm | ↑ | 0.3082 | ± | 0.0046 |
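The log format above matches EleutherAI's lm-evaluation-harness. The following is a minimal sketch, assuming lm-eval 0.4+ is installed (`pip install lm-eval`), of how the logged HellaSwag run could be reproduced; the `model_args` string is copied from the logged configuration, which evaluates EleutherAI/pythia-160m at revision step100000.

```python
# Hedged sketch: reproduce the HellaSwag evaluation recorded above
# using the lm-evaluation-harness Python API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=float",
    tasks=["hellaswag"],
    num_fewshot=0,        # zero-shot, as in the table above
    batch_size="auto:4",  # auto-detect batch size, re-probing 4 times
)

# Print the per-task metrics (acc, acc_norm, and their stderr values).
print(results["results"]["hellaswag"])
```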
How to Use
Download the checkpoint from the Hugging Face Hub and load it with your preferred deep learning framework; a transformers-based example follows.
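A minimal loading sketch with the Hugging Face transformers library, assuming the checkpoint is published on the Hub under the repository name pavel-tolstyko/pavel_tolstyko shown in the model tree below:

```python
# Hedged sketch: load the checkpoint and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pavel-tolstyko/pavel_tolstyko"  # repo name from the model tree

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what a language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```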
Model tree for pavel-tolstyko/pavel_tolstyko
Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0