
---
license: apache-2.0
---

## Model Architecture

Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
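As an instruction-tuned Llama 3 derivative, the model is assumed to expect the published Llama 3 chat template. A minimal sketch of that prompt format follows; the special tokens are the standard Llama 3 template and are an assumption about this fine-tune, so verify them against the tokenizer's own chat template before relying on them:

```python
from typing import Optional

# Sketch of the Llama 3 instruct chat format this tuned model is assumed to
# expect. The special tokens below follow the published Llama 3 template;
# check tokenizer.apply_chat_template for the authoritative version.
def build_llama3_prompt(user_message: str, system: Optional[str] = None) -> str:
    parts = ["<|begin_of_text|>"]
    if system:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        )
    parts.append(
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    )
    # Generation continues from the assistant header.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

In practice the tokenizer's built-in chat template should be preferred; this hand-rolled builder only illustrates the token layout.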

## Training Data

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

Llama 3 family of models. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

## Fine-tuning

SFT test run on Llama 3 with 1,700 examples for 25 epochs.
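The fine-tuning note above (SFT, 1,700 examples, 25 epochs) can be sketched as a training configuration. Only the example count and epoch count come from this card; every other value is a common placeholder assumption, not the author's actual setting:

```python
# Hypothetical SFT configuration matching the numbers stated on this card.
# Only num_examples and num_train_epochs come from the card; the base model id,
# batch size, and learning rate are placeholder assumptions.
sft_config = {
    "base_model": "meta-llama/Meta-Llama-3-8B",  # assumed 8B base, per the 8.03B param count
    "num_examples": 1700,          # from the card: "sft 1700"
    "num_train_epochs": 25,        # from the card: "25 EPOCH"
    "max_seq_length": 8192,        # Llama 3 context length (8k)
    "per_device_train_batch_size": 4,  # assumption
    "learning_rate": 2e-5,             # assumption
    "bf16": True,                  # matches the BF16 tensor type listed below
}

# Derived schedule length under these assumptions (no gradient accumulation).
steps_per_epoch = sft_config["num_examples"] // sft_config["per_device_train_batch_size"]
total_steps = steps_per_epoch * sft_config["num_train_epochs"]  # 425 * 25 = 10625
```

Twenty-five epochs over 1,700 examples is an unusually long schedule for SFT; with so few examples, overfitting is a real risk worth monitoring.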

Model size: 8.03B params (Safetensors, BF16)

Dataset used to train `postitive666/llama3_ruozhiba_8b`