Installation Steps
- Install Ollama.
- Start Ollama and keep it running in the background.
- Clone this repo.
- Change into the repo directory:

  ```
  cd myllama3_gguf
  ```

- Create the model from the Modelfile (a sketch of a typical Modelfile follows below):

  ```
  ollama create myllama3 --file .\ModelFile.txt
  ```
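The actual contents of ModelFile.txt are not reproduced here; as a rough guide, a minimal Ollama Modelfile for a GGUF checkpoint looks like the sketch below, where the GGUF filename, sampling parameter, and system prompt are placeholder assumptions.

```
# Hypothetical sketch only -- the real ModelFile.txt in this repo may differ.
# Path to the GGUF weights (filename assumed).
FROM ./myllama3.gguf
# Example sampling parameter.
PARAMETER temperature 0.7
# Example system prompt.
SYSTEM "You are a helpful assistant."
```

Once the create step finishes, you can talk to the model locally with `ollama run myllama3`, or through Ollama's default local API:

```
curl http://localhost:11434/api/generate -d '{"model": "myllama3", "prompt": "Hello"}'
```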
Uploaded model
- Developed by: sudhanshu-soft
- License: apache-2.0
- Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
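For context only, the usual Unsloth + TRL LoRA finetuning loop over that 4-bit base checkpoint looks roughly like the sketch below; the dataset, LoRA rank, and training arguments are illustrative assumptions, not the actual recipe behind this model.

```python
# Illustrative sketch only -- hyperparameters and dataset are assumptions,
# not the settings used to produce myllama3_gguf.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base checkpoint this model was finetuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha chosen here only as an example).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical dataset: assumed to already provide a "text" column of
# fully formatted training examples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```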