---
tags:
- autotrain
- text-generation
base_model: ahxt/llama2_xs_460M_experimental
datasets:
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
widget:
- text: |-
    ### Instruction:
    Find me a list of some nice places to visit around the world.

    ### Response:
- text: |-
    ### Instruction:
    Tell me all you know about the Earth.

    ### Response:
inference:
  parameters:
    max_new_tokens: 32
    repetition_penalty: 1.15
    do_sample: true
    temperature: 0.5
    top_p: 0.5
---

# ahxt's llama2_xs_460M_experimental trained on WizardLM's Evol Instruct dataset using AutoTrain

- Base model: [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)
- Dataset: [KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format](https://huggingface.co/datasets/KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format)
- Training: 13.5h under [these parameters](https://huggingface.co/Felladrin/llama2_xs_460M_experimental_evol_instruct/blob/cc151c5669ea37c3ef972e375c74f2d9bfd92b49/training_params.json)

## Recommended Prompt Format

```
### Instruction:

### Response:
```

## Recommended Inference Parameters

```yml
repetition_penalty: 1.15
do_sample: true
temperature: 0.5
top_p: 0.5
```
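## Usage Example

The prompt format and inference parameters above can be combined into a short Python snippet. This is a minimal sketch, assuming the `transformers` library is installed; the `format_prompt` helper and `generation_kwargs` dict are illustrative names, not part of the model's API.

```python
# Sketch of querying this model with the recommended prompt format
# and inference parameters from this card. `format_prompt` is a
# hypothetical helper name chosen for this example.

def format_prompt(instruction: str) -> str:
    """Wrap an instruction in the prompt format this model was trained on."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Recommended inference parameters from this card.
generation_kwargs = {
    "max_new_tokens": 32,
    "repetition_penalty": 1.15,
    "do_sample": True,
    "temperature": 0.5,
    "top_p": 0.5,
}

# Example generation call (commented out because it downloads the
# model weights from the Hugging Face Hub):
# from transformers import pipeline
# generate = pipeline(
#     "text-generation",
#     model="Felladrin/llama2_xs_460M_experimental_evol_instruct",
# )
# prompt = format_prompt("Tell me all you know about the Earth.")
# print(generate(prompt, **generation_kwargs)[0]["generated_text"])
```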