
Experiment Objectives

  1. Does training on a Korean + multilingual dataset improve performance on Korean benchmarks?
  2. Does full-parameter depth-up-scaled training (expansion method: Llama-Pro) achieve the best Korean benchmark performance?

Methods

  1. Train on a CJK + En + Glot dataset mixed in equal proportions by data size (see the first sketch below).
  2. Expand layers and train all parameters (see the second sketch below).
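
Method 1 can be approximated as an equal-probability interleave over the source corpora. The sketch below assumes the Hugging Face `datasets` library; the dataset names are hypothetical placeholders, since the exact corpora are not specified in this card.

```python
# A minimal sketch of equal-ratio data mixing, assuming the Hugging Face
# `datasets` library. All dataset names are hypothetical placeholders.
from datasets import load_dataset, interleave_datasets

sources = [
    load_dataset("placeholder/cjk-corpus", split="train", streaming=True),
    load_dataset("placeholder/en-corpus", split="train", streaming=True),
    load_dataset("placeholder/glot-corpus", split="train", streaming=True),
]

# Equal sampling probabilities approximate an equal data-size ratio
# across the CJK, English, and Glot sources.
mixed = interleave_datasets(
    sources,
    probabilities=[1.0 / len(sources)] * len(sources),
    seed=42,
)
```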
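
For method 2, a Llama-Pro-style expansion duplicates decoder blocks at a fixed interval and zero-initializes their output projections, so each new block starts as an identity mapping on the residual stream; all parameters (original and copied) are then trained. The sketch below assumes the `transformers` library and a Llama-family checkpoint; the base model name and the expansion interval are illustrative assumptions, not the exact recipe used for this model.

```python
# A minimal sketch of Llama-Pro-style block expansion, assuming `transformers`
# and a Llama-family architecture. Model name and interval are assumptions.
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
layers = model.model.layers
interval = 4  # insert one copied block after every 4 original blocks (assumed)

expanded = torch.nn.ModuleList()
for i, layer in enumerate(layers):
    expanded.append(layer)
    if (i + 1) % interval == 0:
        new_layer = copy.deepcopy(layer)
        # Zero the output projections so the copied block initially adds
        # nothing to the residual stream (identity mapping at start).
        torch.nn.init.zeros_(new_layer.self_attn.o_proj.weight)
        torch.nn.init.zeros_(new_layer.mlp.down_proj.weight)
        expanded.append(new_layer)

model.model.layers = expanded
model.config.num_hidden_layers = len(expanded)

# Renumber layer indices so generation-time KV caching stays consistent.
for idx, layer in enumerate(expanded):
    layer.self_attn.layer_idx = idx
```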