---
license: mit
tags:
  - phi3
  - nlp
  - moe
datasets:
  - BEE-spoke-data/gutenberg-en-v1-clean
  - NeelNanda/pile-10k
---

# phi 3 4x4b

a continually pretrained phi3-mini sparse moe upcycle
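for anyone unfamiliar with the term, "upcycling" here means initializing every expert of a sparse MoE layer from the dense model's original MLP and adding a freshly initialized router on top. a rough sketch of the idea in pytorch (module and argument names are made up for illustration, this is not the actual conversion code):

```python
import copy
import torch
import torch.nn as nn

class UpcycledSparseMoE(nn.Module):
    """Sketch: turn one dense MLP into a small top-k sparse MoE layer."""
    def __init__(self, dense_mlp: nn.Module, hidden_size: int,
                 num_experts: int = 4, top_k: int = 2):
        super().__init__()
        # each expert starts as an exact copy of the original dense MLP
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_mlp) for _ in range(num_experts)
        )
        # the router/gate is the only newly initialized component
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden)
        logits = self.gate(x)                                  # (b, s, num_experts)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)  # pick top-k experts per token
        weights = torch.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# example: upcycle a toy dense MLP into a 4-expert MoE
dense = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
moe = UpcycledSparseMoE(dense, hidden_size=64)
y = moe(torch.randn(2, 8, 64))   # -> (2, 8, 64)
```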

## support me on ko-fi!

please i need money to stay alive and keep making models

## notes

not trained on instruct data. if you prompt it with a chat/instruct format it probably won't behave much differently from phi 3, and may even do worse, since the continued pretraining could have caused some forgetting of those instruct formats.
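so it's probably best used as a base model for plain text completion. a minimal usage sketch (the repo id `Fizzarolli/phi3-4x4b-v1` is assumed from this page, adjust if it differs):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumed repo id; double-check before running
repo_id = "Fizzarolli/phi3-4x4b-v1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# trust_remote_code may or may not be needed depending on how the MoE architecture is packaged
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# plain completion prompt, no chat/instruct template
prompt = "It was a dark and stormy night, and"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```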

## future experiments

- the datasets for this were literally chosen on a whim. perhaps experiment with a further filtered HuggingFaceFW/fineweb-edu?
- actually freeze the gate layers next time (see Chen et al., 2023), oops. a sketch of what that would look like is below this list
- MOAR TRAINING, this only went up to ~0.2 of an epoch because i ran out of money
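a minimal sketch of freezing the gates before the next continued-pretraining run (the parameter-name matching is an assumption, the real names depend on the model implementation):

```python
from transformers import AutoModelForCausalLM

# same assumed repo id as the loading example above
model = AutoModelForCausalLM.from_pretrained("Fizzarolli/phi3-4x4b-v1", trust_remote_code=True)

# freeze every router/gate parameter so only the expert (and attention) weights get updated.
# note: in llama/phi-style MLPs a plain "gate" match would also catch gate_proj,
# so refine the filter to whatever the router parameters are actually called.
for name, param in model.named_parameters():
    if "router" in name or "gate" in name:
        param.requires_grad_(False)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters after freezing gates: {trainable:,}")
```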