AtakanTekparmak/llama-3-20b-instruct-Q8_0-GGUF
My merging experiments for the Llama-3 series of models.
Note Did not work at all; taking the first layer from the base model did not play well with the instruction format.
Note Also didn't work very well; trying a more fine-grained config after this.
Note The first merge that showed promise of improvement over the base model and didn't completely brick. Future merges will continue in this direction of fine-grained layer configuration.
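For reference, a fine-grained layer configuration of this kind is typically expressed as a mergekit passthrough ("frankenmerge") config that stacks overlapping layer slices. The sketch below is an illustrative assumption, not the actual config used for these merges; the model name and layer ranges are hypothetical placeholders.

```yaml
# Hypothetical mergekit passthrough config: stacks overlapping
# layer slices of an 8B model to build a deeper merged model.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder model
        layer_range: [0, 24]   # keep early/instruct-tuned layers intact
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 32]   # overlap middle layers, duplicate depth
merge_method: passthrough
dtype: bfloat16
```

Adjusting the slice boundaries (and which model each slice comes from) is what "fine-grained layer configuration" amounts to in practice.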