Quantization made by Richard Erkhov.
pythia-31m-simplewiki-scratch-bf16 - bnb 4bits
- Model creator: https://huggingface.co/pszemraj/
- Original model: https://huggingface.co/pszemraj/pythia-31m-simplewiki-scratch-bf16/
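The exact bitsandbytes settings used for this 4-bit export are not stated on the card; as a hedged illustration, a comparable bnb 4-bit load of the original checkpoint can be reproduced with transformers and bitsandbytes along these lines:

```python
# Sketch only: reproduce a bnb 4-bit load of the original checkpoint.
# The quantization settings below are assumptions; the card does not state the exact config used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "pszemraj/pythia-31m-simplewiki-scratch-bf16"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights via bitsandbytes
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16 compute, matching the base model dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate; places the quantized weights automatically
)
```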
Original model description:
```yaml
tags:
  - generated_from_trainer
metrics:
  - accuracy
inference:
  parameters:
    max_new_tokens: 64
    do_sample: true
    repetition_penalty: 1.1
    no_repeat_ngram_size: 5
    guidance_scale: 1.01
    eta_cutoff: 0.001
widget:
  - text: My name is El Microondas the Wise and
    example_title: El Microondas
  - text: A meme is
    example_title: meme
  - text: >-
      Barack Obama nominated Hilary Clinton as his secretary of state on
      Monday. He chose her because she had
    example_title: Coreference resolution
  - text: >-
      On a shelf, there are five books: a gray book, a red book, a purple
      book, a blue book, and a black book
    example_title: Logic puzzles
  - text: >-
      The two men running to become New York City's next mayor will face off
      in their first debate Wednesday night
    example_title: Reading comprehension
license: apache-2.0
datasets:
  - pszemraj/simple_wikipedia_LM
pipeline_tag: text-generation
```
pythia-31m-simplewiki-scratch-bf16
Trained from a randomly initialized config based on EleutherAI/pythia-31m for 3 epochs in bf16. It achieves the following results on the evaluation set:
- Loss: 4.1763
- Accuracy: 0.3676
Model description
Tuned with bf16 (the previous version used fp32).
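As a usage illustration (not part of the original card text), the widget parameters from the metadata above can be passed straight to `generate()`; this sketch assumes a transformers version recent enough to support `guidance_scale` and `eta_cutoff`:

```python
# Sketch: generation with the inference parameters listed in the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pszemraj/pythia-31m-simplewiki-scratch-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "A meme is"  # one of the widget examples above
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
    guidance_scale=1.01,
    eta_cutoff=0.001,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```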
Intended uses & limitations
More information needed
Training and evaluation data
***** eval metrics *****
epoch = 2.99
eval_accuracy = 0.3723
eval_loss = 4.1155
eval_runtime = 0:00:14.44
eval_samples = 500
eval_samples_per_second = 34.602
eval_steps_per_second = 17.301
perplexity = 61.2811
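For reference, the perplexity follows directly from the evaluation loss: exp(4.1155) ≈ 61.28, matching the value reported above.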
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
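For readers reproducing the run, the list above corresponds roughly to the following transformers `TrainingArguments`; this mapping is an assumption for illustration, since the original training script is not included in the card:

```python
# Rough sketch of how the hyperparameters above map onto transformers TrainingArguments.
# The output_dir is a placeholder; the actual training setup is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pythia-31m-simplewiki-scratch-bf16",
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=80085,
    gradient_accumulation_steps=64,  # 2 * 64 = total train batch size of 128
    lr_scheduler_type="inverse_sqrt",
    warmup_ratio=0.05,
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-7,
    bf16=True,
)
```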
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.8617        | 0.45  | 100  | 5.5276          | 0.2451   |
| 5.2782        | 0.9   | 200  | 4.9596          | 0.2965   |
| 4.9996        | 1.35  | 300  | 4.6412          | 0.3310   |
| 4.6292        | 1.8   | 400  | 4.4344          | 0.3485   |
| 4.5339        | 2.25  | 500  | 4.2875          | 0.3600   |
| 4.5214        | 2.7   | 600  | 4.1763          | 0.3676   |
Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric               | Value |
|:---------------------|------:|
| Avg.                 | 24.63 |
| ARC (25-shot)        | 22.78 |
| HellaSwag (10-shot)  | 25.61 |
| MMLU (5-shot)        | 23.12 |
| TruthfulQA (0-shot)  | 49.65 |
| Winogrande (5-shot)  | 50.51 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 0.72  |