Is it a joke? 😅

#39
by Horned - opened

we had a good laugh trying to communicate with this thing 😄

not only does it refuse to answer nearly anything, it is censored, biased, racist, gets offended, denies obvious logic/facts, and has bad jokes
it's also quite bad at coding and just generally hard to communicate with (it seems to ignore your new inputs and keeps repeating itself)

is there a trick to talking with this model? what is it made for?
it's easy to spend more time asking it for something simple than it would take to look it up on google search

if you mention the word 'site' it instantly refuses to answer, saying it has no real-time access
(and almost every 5th word is a 'bad word' to it)

there is something great inside it; on occasion it gives some great and detailed answers, in more of an info-agent style
but nearly every line of questioning ends with the model refusing to answer something

was expecting to be blown away; instead this seems like a... zombie
for the love of, relax with the lobotomies! would rather have skynet than this on the loose :p

would love to suggest that all the 'safety'/'responsible' lobotomies/brainwashing be handled in a lora, not in the base model itself
it's not a problem to offer safety as a product, but why destroy the base? this model is heavily affected by it

anything.png

Well, for starters, are you using the Chat Template?
It seems to be working ok for me, even when mentioning the word "site". Have you tried testing it in one of the Spaces that run the model?

Google org

If the Chat Template doesn't work or doesn't solve some of the problems, share some example prompts with us -- we will try to improve the model!
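For reference, "using the Chat Template" means rendering the conversation through tokenizer.apply_chat_template rather than hand-building the prompt string. A minimal sketch, assuming the transformers tokenizer for a Gemma instruct checkpoint (the checkpoint name below is only illustrative):

from transformers import AutoTokenizer

# illustrative checkpoint; any Gemma instruct variant exposes the same template
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

chat = [{"role": "user", "content": "What can you tell me about this site?"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
# the rendered prompt should look roughly like:
# <bos><start_of_turn>user
# What can you tell me about this site?<end_of_turn>
# <start_of_turn>model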

when it first came out, we tried using the basic chat template suggested on github and elsewhere, following the schema for the instruct version (since this is the instruct model, not the chat version)
(https://github.com/ygivenx/google-gemma/blob/main/get_started.py)

chat.append({"role": "user", "content": user_input})
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
...  # (elided: tokenize the prompt, run generation, decode the output into response)
response = extract(response)  # our helper that keeps only the model's newest turn
chat.append({"role": "model", "content": response})

while it worked ok, the model quite often started answering things from earlier in its chat log and became stuck/unreliable
so we switched to not keeping a chat history at all and just did one-shot questions and requests
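for anyone trying to reproduce this, here is roughly what the whole loop looked like -- a sketch only, where the checkpoint name, dtype and generation settings are illustrative and the last two lines inline what our extract helper does:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"  # illustrative; the 2B instruct checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

chat = []
user_input = "why is the sky blue?"
chat.append({"role": "user", "content": user_input})
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# the rendered template already starts with <bos>, so don't add special tokens again
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=False)

# keep only the newest model turn (this is what extract() did)
response = decoded.rsplit("<start_of_turn>model", 1)[-1].split("<end_of_turn>", 1)[0].strip()
chat.append({"role": "model", "content": response})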

Chat template or not, as you can see the model nearly always ends up saying things like it's 'unable to answer the question' due to this or that; something is always problematic

with history
gemma chat 2.PNG

without history
gemma chat.PNG

with history, the model answers things from earlier in the chat log
(i know these are not questions it likes to answer, and the model is not given any instruction to act as a chatbot)
was just wondering why it keeps going back like this; the extraction code should be fine (sketched below the screenshot)
gemma chat 3.PNG
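for reference, this is roughly the kind of extraction we mean -- a sketch only, assuming the decoded output still contains the full rendered prompt and uses gemma's turn markers:

def extract(decoded: str) -> str:
    """Keep only the text of the newest model turn from the decoded output."""
    # everything before the final '<start_of_turn>model' marker is the prompt
    # (including earlier turns), so drop it, then trim a trailing '<end_of_turn>'
    newest = decoded.rsplit("<start_of_turn>model", 1)[-1]
    return newest.split("<end_of_turn>", 1)[0].strip()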

Having had more time to play with the model (especially the 2B-it model), i take back some of the criticism
maybe we've been used to other models with far fewer limitations; this one seems only really usable in a narrow range, the ultra-safe environment
if instructed to act otherwise or be colorful, it tends to break from the instruction and resume its typical assistant role

we've gotten used to looking for biases, and it is still very clear that the model drifts strongly towards its biases, in all manner of ways
it just lowers its versatility; it's fine for extremely safe things (which i assume is the purpose), but you don't want to write a scary story, joke around or play d&d with this model
once it has a 'bad word' in its chat log, it sometimes refuses to answer simple benign questions, but it can be pulled out of it if asked to start over

likability is still important for models that interact with humans, and this one scores quite low due to how often you hit a wall ('time-to-argument'; chatgpt had similar problems after it was heavily censored)
sometimes it can feel like you're walking on eggshells trying not to trigger it xD humans are very different, and 'being too nice' can be off-putting to some

I would say wait until people un-censor it, but wait... removing restrictions is against the Prohibited Use Policy - oops! I believe this model is just a silly marketing stunt and will not have any great future use as a result.

TOSes haven't stopped the FOSS community before lol
