nvidia/Llama-Nemotron-Post-Training-Dataset • Viewer • Updated 6 days ago • 3.91M • 6.45k • 421
Post: You can now run Llama 4 on your own local device! Run our Dynamic 1.78-bit and 2.71-bit Llama 4 GGUFs: unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF. You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4
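The post itself only links to the guide; as a rough illustration of the workflow it describes, a GGUF file can be fetched from the Hub and loaded with llama-cpp-python (the llama.cpp Python bindings). This is a minimal sketch, not the guide's procedure: the quant filename below is a placeholder, and the Dynamic 1.78-bit / 2.71-bit quants in unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF are actually split across several shards, so check the repo's file listing (or the linked tutorial) for the real names.

```python
# Sketch only: fetch one GGUF file from the Hugging Face Hub and load it
# with llama-cpp-python. The filename is a placeholder, not a real shard
# name from the unsloth repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    filename="Llama-4-Scout-17B-16E-Instruct-UD-IQ1_S.gguf",  # placeholder filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload as many layers as fit onto the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```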
unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF • Image-Text-to-Text • Updated 11 days ago • 27.6k • 16
Open Portuguese LLM Leaderboard • Track, rank and evaluate open LLMs in Portuguese • Running on CPU Upgrade • 191
Post: You can now run DeepSeek-V3-0324 on your own local device! Run our Dynamic 2.42-bit and 2.71-bit DeepSeek GGUFs: unsloth/DeepSeek-V3-0324-GGUF. You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-v3-0324-locally
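For a sharded repo like unsloth/DeepSeek-V3-0324-GGUF it is usually easier to pull just one quant level rather than the whole repository. A minimal sketch with huggingface_hub's snapshot_download follows; the "UD-Q2_K_XL" pattern is an assumption about how unsloth names its 2.71-bit Dynamic quant, so verify it against the repo's file listing before relying on it.

```python
# Sketch: download only one quant level of a multi-quant GGUF repo.
# The "*UD-Q2_K_XL*" pattern is an assumed name for the 2.71-bit Dynamic
# quant; confirm the actual folder/file names in the repo first.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="unsloth/DeepSeek-V3-0324-GGUF",
    allow_patterns=["*UD-Q2_K_XL*"],      # skip the other quant levels
    local_dir="DeepSeek-V3-0324-GGUF",
)
print("Shards downloaded to:", local_dir)
# llama.cpp can then be pointed at the first shard of the split GGUF
# and will load the remaining pieces automatically.
```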
unsloth/Llama-4-Maverick-17B-128E-Instruct-FP8 • Image-Text-to-Text • Updated 14 days ago • 745 • 6