I thought this was a base model

#7
by Kquant03 - opened

[screenshot: image.png]

We need to stop pretraining models like this. This is the worst example of pretraining I have ever seen in my entire life. I should be able to fine-tune it to emulate all of the things it denies being able to do; it doesn't matter whether those things are "real," it's just a token predictor. It's so capable yet so limited, and I don't understand why. This is no longer a base model but an assistant model, which defeats the entire purpose of pretraining: you pretrain to create a model that can then be fine-tuned for specific tasks, such as being an assistant.

tl;dr: a base model is not an assistant/servant. It's supposed to be a bunch of token connections learned from diverse data sources.
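For comparison, this is what I'd expect a genuine base checkpoint to do: plain text continuation, no chat persona, no refusals. A minimal sketch with transformers; the model id below is a placeholder, not the checkpoint discussed here:

```python
# Minimal sketch: raw next-token prediction from a base checkpoint.
# "some-org/some-base-model" is a placeholder, not the model in this thread.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-base-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# No chat template, no system prompt: just feed text and let it continue.
prompt = "The old lighthouse keeper opened the door and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# A true base model just continues the story; an assistant-flavored one
# tends to drift into disclaimers and helpful-assistant boilerplate.
```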

But hey, now I can use the model to map every single connection to the tokens "AI" and "Artificial Intelligence" that these companies are baking in, so I can tear those associations down with RLHF and then rebuild them.
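If the goal is to see what the checkpoint has actually tied to those tokens before reshaping it, one rough starting point is embedding-space neighbors. A minimal sketch under the same placeholder model id (not this repo); cosine similarity over the input embeddings is just one crude probe of token associations:

```python
# Minimal sketch: tokens closest to " AI" in the model's input embedding space.
# Model id is a placeholder; this is a rough probe, not a full association map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-base-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

emb = model.get_input_embeddings().weight.detach()  # (vocab_size, hidden_dim)
emb = torch.nn.functional.normalize(emb, dim=-1)

token_id = tokenizer.encode(" AI", add_special_tokens=False)[0]
sims = emb @ emb[token_id]                           # cosine similarity to " AI"
top = torch.topk(sims, k=20).indices.tolist()
print([tokenizer.decode([i]) for i in top])
```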
