Context-awareness in instruction finetuning
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the yihanwang617/vicuna_clean_processed_indicator_0.6_4k dataset. It achieves the following results on the evaluation set:

- Loss: 0.7582
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
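The card does not include a usage snippet, so the following is a minimal inference sketch with the transformers library; the repo id below is a placeholder and should be replaced with the actual Hub id of this fine-tuned checkpoint.

```python
# Minimal inference sketch (assumption: the fine-tuned weights are hosted on the
# Hugging Face Hub; replace repo_id with this model's actual repository id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "meta-llama/Llama-2-7b-hf"  # placeholder: swap in the fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # half precision keeps the 7B model on a single GPU
    device_map="auto",           # requires the accelerate package
)

prompt = "Summarize what instruction fine-tuning does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```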
Training hyperparameters

The following hyperparameters were used during training:

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7995        | 0.9998 | 1463 | 0.7582          |
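If a more intuitive quality number than raw loss is wanted, the validation loss can be converted to perplexity, assuming it is the standard mean token-level cross-entropy in nats:

```python
# Sketch: convert the reported validation loss to perplexity, assuming the loss
# is the usual mean token-level cross-entropy with a natural-log base.
import math

val_loss = 0.7582
print(math.exp(val_loss))  # ~2.13 perplexity on the evaluation set
```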
Base model: meta-llama/Llama-2-7b-hf
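For inspecting the fine-tuning data, a short sketch with the datasets library follows; it assumes the dataset id named above is publicly available on the Hub, and the "train" split name is an assumption since splits are not documented here.

```python
# Sketch: load and inspect the fine-tuning dataset (split name is an assumption).
from datasets import load_dataset

ds = load_dataset("yihanwang617/vicuna_clean_processed_indicator_0.6_4k", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # one example record
```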