---
license: apache-2.0
---

Meta's Llama-3 8B base model, fine-tuned on the Alpaca dataset and exported as a 16-bit GGUF instruct model.

Below is an example of command-line inference using llama.cpp (the `$'…'` quoting lets the shell expand the `\n` escapes into real newlines for the Alpaca prompt format):

```
./build/bin/main \
  -m ./models/llama3_alpaca_dpo_GGUF-unsloth.F16.gguf \
  -p $'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nWhy is the sky blue?\n\n### Input:\n\n\n### Response:\n'
```
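
The same GGUF file can also be loaded from Python. The sketch below uses the llama-cpp-python bindings, which are not part of this repo and are assumed to be installed (`pip install llama-cpp-python`); the model path and generation parameters are illustrative and should be adjusted for your setup.

```
# Minimal sketch using the llama-cpp-python bindings (an assumption, not part of this repo).
from llama_cpp import Llama

# Point model_path at wherever you downloaded the GGUF file.
llm = Llama(
    model_path="./models/llama3_alpaca_dpo_GGUF-unsloth.F16.gguf",
    n_ctx=2048,  # context window; increase if your prompts are longer
)

# Same Alpaca-style prompt template as the CLI example above.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\nWhy is the sky blue?\n\n"
    "### Input:\n\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```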