Nice work!

#1 · opened by mlabonne

Hey @grimjim, thanks for this model. I find your technique super interesting. I applied it to the 70B model here, and it works well in my tests: https://huggingface.co/mlabonne/Llama-3.1-70B-Instruct-lorablated
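
In case it's useful to others, the merge step boils down to something like this (a minimal sketch with PEFT, assuming the abliteration LoRA has already been extracted; the adapter path below is hypothetical):

```python
# Minimal sketch: merge a pre-extracted abliteration LoRA into the instruct model.
# The adapter path is hypothetical; substitute your own extracted adapter.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-70B-Instruct", torch_dtype=torch.bfloat16
)
# Attach the abliteration LoRA and fold it back into the base weights.
model = PeftModel.from_pretrained(base, "path/to/abliteration-lora")
merged = model.merge_and_unload()
merged.save_pretrained("Llama-3.1-70B-Instruct-lorablated")
```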

I'm curious to see what you're going to build next.

Thanks, @mlabonne. I have what seems to me an obvious concept regarding abliteration, which I've written up in a post (https://huggingface.co/posts/grimjim/971273605175586). Care to revisit direct abliteration of Llama 3.1?

Yes, targeting the middle layers is something I try, since they don't change the inputs as much, which makes them ideal for this use case.
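
For anyone following along, the core operation looks roughly like this (a minimal sketch of direction-based abliteration; the function names, shapes, and choice of layer are illustrative assumptions, not the exact recipe used for these models):

```python
# Minimal sketch of abliteration: estimate a "refusal direction" from residual-stream
# activations at a chosen middle layer, then project it out of weights that write
# into the residual stream. Names and shapes here are illustrative assumptions.
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """acts: [num_prompts, hidden_dim] activations collected at the chosen middle layer."""
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the direction from the output space of an nn.Linear weight
    of shape [out_features, in_features]: W <- W - d (d^T W)."""
    return weight - torch.outer(direction, direction @ weight)
```

In this kind of recipe the projection is typically applied to the attention output and MLP down projections across many layers, while the direction itself is estimated from activations at a middle layer.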
