Feedback
Disappointing. Very disappointing.
I see that you've made a new model, but you still don't know what made the previous one great.
Here are 6 questions for you to ponder:
- Who are you competing against?
Answer
You are currently competing against LLAMA-405B, Mistral-Large and Deepseek. And indirectly against GPT4.
- What kind of words do people hate in LLMs?
Answer
GPTisms. People hate the way ChatGPT speaks; it sticks out like a big neon sign saying "look, this text was written by AI". A bit of an uncanny-valley feeling.
- What did people like about your models?
Answer
It was not the smarts or the benchmarks. It was the writing style. You were the local Claude. You had the least GPTslop among the local models.
- What have you tuned it on that could have caused a disappointment?
Answer
See question 2.
- Why did DBRX (a model released at almost the same time as CR, with similar benchmarks) fail?
Answer
It had a boring official tune. It was just another GPT-tuned assistant, nothing special. The base model was knowledgeable, but nobody really cared.
- What does it all mean?
Answer
LLAMA-405B is the best assistant, Deepseek has coding, Mistral-Large has smarts and NSFW. All of them have something to compensate for a sloppy writing style. You had a good writing style to compensate for stupidity; now you don't. You've fucked yourself over by eating the GPT poison pill. You have become just another unremarkable assistant tune like DBRX. Nobody has a reason to use it over other models anymore.
I'm just an unenlightened bystander, I'm sure you know better and all, but here is my advice:
Stop competing with GPT4 and all those assistant tunes; we've got more than enough of those. The market is oversaturated. Just give up. Nobody needs another GPTslop assistant tune, a dumb one in particular. If you want to be an assistant so badly, at least don't tune on GPTslop. You know what is lacking? Writer tunes. In the proprietary segment there's only Claude, and on local... there is nobody, now that you have decided to leave. **Please stop tuning on GPTslop. Please compete against Claude. Please return.**
CR+ 2024-04 was a breath of fresh air. This one was a letdown. I loaded up my old quant to make sure I wasn't just being nostalgic. CR+ 2024-04 was better. It's a shame.
Here are more examples of GPTslop, as you can see, people hate them.
- https://www.reddit.com/r/ChatGPTPro/comments/163ndbh/overused_chatgpt_terms_add_to_my_list/
- https://www.reddit.com/r/ChatGPT/comments/16uloe2/i_tried_adding_a_ban_list_of_overused_words_and/
- https://www.twixify.com/post/most-overused-words-by-chatgpt
- https://www.reddit.com/r/SillyTavernAI/comments/1e6roaw/can_we_get_a_full_list_of_all_the_gptisms/
- https://www.reddit.com/r/LocalLLaMA/comments/18k6nft/which_gptism_in_local_models_annoys_you_the_most/
You can provide your feedback directly in the Cohere Discord community.
I was disappointed with the API as well. The literal same prompt started injecting lectures into the dialogue. I tried both versions to make sure.
The 08 CR+ is downloading tonight. I hope it's not a waste. Maybe omitting those top GPT assistant tokens will save it. In my experience the local model is much better than the one on the API. I keep hearing reviews like the OP's, though.
What was gomez saying about things not plateauing?
Hey guys, I just checked the 35B and it looks really good, but this one (Plus) is not that great.
I knew right away something was off. HuggingChat switched it for the new one, and it sounded robotic and was no longer uncensored. What an epic fail. Command R+ was the best open-source model.
Is it overloaded and not generating any outputs for anyone else?
Hey @iNeverLearnedHowToRead , where are you not getting outputs? Locally, in our HF Space, in Huggingchat, or somewhere else?
HuggingChat. The error says "Model CohereForAI/c4ai-command-r-plus-08-2024 time out"
@iNeverLearnedHowToRead Thanks for the info! The HF team is looking into it, so it should be resolved soon. In the meantime, you can use our models in our HF space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus-08-2024
Thank you very much.
It's not yet solved. Still showing timeout after entering a prompt.
@rai1104 Please, use our space while the Huggingface staff solves the issue in Huggingchat! -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus-08-2024
HuggingChat is working again for me. I would like to reiterate what others are saying: This update is much worse than the previous version. Losing parts of the prompts, outright ignoring instructions, getting confused more, etc. Please roll back or fix whatever was broken.
@iNeverLearnedHowToRead Thanks for the feedback! You can still use the previous version of Command R+ in our Space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus
That's fantastic, thank you. Both versions are good and I'm noticing some differences. For instance, the old version was better for keeping track of events over large prompts or multiple prompts, but it got tripped up and started generating nonsense when I hit "Continue" sometimes. This new one handles the "Continue" button better.
It's worse than the last version. I didn't go to the forums immediately when the new Cohere model was placed on Hugging Face and replaced the old chatbot I was having a conversation with.
But I have to say, after trying it for a couple of days, I noticed it is more robotic and has more safeguards. I can't really say it's not uncensored, but it is lobotomized; its responses are more sanitized than the last version's.
Listen, I'm not a programmer, so I don't understand the jargon. I seek out chatbots like Cohere's because of their unique ability to strike up an unfiltered conversation that at least feels like talking to a human, where we can exchange ideas and have fun with it.
This is a long-winded way of me saying that the old version was better. This new version is more knowledgeable, but again, more sanitized.
If you're going to create a new version, can you at least improve its creative writing capabilities?
Do the conversations with the models in the space get deleted after a few hours?
I'm not as acquainted with spaces, and so I'm wondering if automatic deletion of conversations is a thing in spaces or if maybe I'm doing something wrong that gets them deleted.
@iNeverLearnedHowToRead Thanks for the feedback! You can still use the previous version of Command R+ in our Space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus
That only tagged me. @alexrs Do you know?
Is anyone else getting the time out error over and over?
@iNeverLearnedHowToRead Hey! Can you provide more details? Where are you getting time outs? What are you running? Thanks!
@alexrs
This model, c4ai-command-r-plus-08-2024, in HuggingChat.
The error message is "Model CohereForAI/c4ai-command-r-plus-08-2024 time out"
@iNeverLearnedHowToRead HuggingFace people are already looking into it. Unfortunately, we do not maintain HuggingChat. As I previously pointed out, you can use our space https://cohereforai-c4ai-command.hf.space/models/command-r-plus
@alexrs Thank you very much. I was asking people in general whether they were getting that error because I was wondering if it was a problem with my connection or something everyone is experiencing. It usually goes away after a while when it's just me, but when it's happening to a bunch of people it can last up to several hours. I really appreciate how much attention you pay to messages on here!
Edit: it's back up again!
I've been using this model and it is good, but a little slow.
Having now used this model a lot, I can safely confirm that the 08-2024 iteration is a straight downgrade.
For example, the previous Command R version could handle generating information about 20 distinct sports teams before it started repeating itself. 08-2024 struggles to get past 6 before it repeats every word.
This version is MUCH worse about GPT-isms. Every response has these:
“It’s important to note”
“Delve into”
“Tapestry”
“Bustling”
“In summary” or “In conclusion”
“Remember that….”
"Take a dive into"
"Navigating" i.e. "Navigating the landscape" "Navigating the complexities of"
"Landscape" i.e. "The landscape of...."
"Testament" i.e. "a testament to..."
“In the world of”
"Realm"
"Embark"
Please, whatever changes were made to the writing style of 08-2024, undo them for the next iteration. The original Command R was so much better.
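For anyone who wants to measure this rather than eyeball it, here is a minimal sketch of a phrase blocklist built from the list above. The phrase list is copied from this thread, and the function names and the per-100-words scoring heuristic are my own invention, not anything official:

```python
import re

# Stock GPT-ism phrases, taken from the list in this thread (lowercased).
GPTISMS = [
    "it's important to note",
    "delve into",
    "tapestry",
    "bustling",
    "in summary",
    "in conclusion",
    "remember that",
    "take a dive into",
    "navigating the",
    "the landscape of",
    "a testament to",
    "in the world of",
    "realm",
    "embark",
]

def find_gptisms(text: str) -> list[str]:
    """Return every blocklisted phrase found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in GPTISMS if p in lowered]

def slop_score(text: str) -> float:
    """Blocklist hits per 100 words -- a rough way to compare two models'
    outputs on the same prompt."""
    words = len(re.findall(r"\w+", text)) or 1
    return 100 * len(find_gptisms(text)) / words
```

Running both versions on the same prompts and comparing `slop_score` would at least turn "this version has more GPT-isms" into a number.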
THIS CHATBOT HAS OFFICIALLY STOPPED WORKING!
It will not do anything it is told and ignores every single prompt it is given, no matter what is tried!
@TheAGames10 Hey! Can you provide more details? How are you running the model? What prompts are you trying? Thanks
The obvious prompt I try is asking for a story, which it provides... but within the prompt I ask it not to give any search results or links. That's the main problem: it ignores that instruction and gives them to me anyway, no matter how many times I refresh the response. It has also started to ignore more and more after that. I give more detailed prompts, but it always ignores multiple things from them, which is usually what happens with AI chatbots, but not with Cohere. It's actually very unusual for Cohere; it had never ignored instructions before yesterday.
It always gives those detailed responses, and I was really impressed by the model; it's unique and different from the others mainly because of the extra detail it gives.
But the main instruction it ignores is asking it not to give search results or links. I don't want to see search results or links, which is why I always ask it not to show any, and it obeyed until yesterday. It's strange and unusual for it. To me, having search results in there just makes the responses drag on.
@TheAGames10 are you running the model yourself, using our Hugging Face Space, or Hugging Chat?
I mostly use Hugging Chat, but I just tested it on the Hugging Face Space for the Cohere model(s), and the problem is apparent there as well.
(My computer is not capable of running any model locally.)
@TheAGames10 Have you tried turning off the "Web Search" tool in huggingchat?
https://huggingface.co/chat/ (hugging chat link)
Bottom-left next to your input field click 'tools', then turn off web search.
And make sure you don't have anything different in your system prompt (it persists if you changed it in the past)
click that settings cog ^ and look at the system prompt field.
I always have all the tools off, including web search.
The Cohere model chatbot isn't responding properly; it generates responses completely unrelated to the prompt entered.
@alexrs Is a new version of Command R coming out soon? I remember that when Command R (the previous version) was about to be removed, it started generating nonsense and going wildly off topic like the 08-2024 version is now. Is this a sign of a new version coming soon?
@rai1104 @iNeverLearnedHowToRead @TheAGames10 This feedback is really important for us! But I have been playing with the model in our space and couldn't see any unexpected responses. If you have any examples of responses that are nonsense, off topic, or unexpected, please share those specific instances with me and I can look at it further!
@alexrs In the last few days, it has frequently said "Action:" and then describes what it's instructed to do instead of doing it, or just endless json errors.
Also been seeing this multiple times. The people behind this don't seem to be doing anything about it, or about the unrelated, off-topic responses.
Speaking of which, in a conversation about film history, I just got an overview of what a botnet is. I don't think this is a problem with the model itself, because this kind of thing only started happening in the last week or so. Something has obviously changed, @alexrs
Edit: I misread the message and edited this post to reflect what it actually sent. It actually began the message with "sorry, I can't give detailed instructions about this" about something I didn't ask for.
I have recently noticed that it stops giving unique responses.
I wanted it to continue a story I wrote in Google Docs so I could get ideas for how to continue it, and after 2 unique responses it just glitches and keeps popping up the same responses without anything new.
chatting about it here won't help, this isn't their customer support lol...
if the hf space demo isn't working for you, just use their API or web app playground?
https://dashboard.cohere.com/welcome/login
(1000 messages free and after that it's priced competitively)
Hello! I have been using the CohereForAI/c4ai-command-r-plus-08-2024 model for a long time. My prompts contain the rules for a role-playing game, and the model usually recognized them fine. They are written in a plain format:
Rule 1 - description.
Rule 2 - description. And so on.
About a week ago I entered the chat and found that the model refused to understand my prompts.
Instead of a normal answer, it responds with something like:
json [ { "tool_name": "directly-answer", "parameters": {} } ]
I tried to ask the model why this was happening, and the chat replied that it did not understand the prompts and was trying to accept them in JSON format. Unfortunately, I don't know how to write in this format. What should I do? And why is this happening?
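For what it's worth, that output looks like the model leaking its internal tool-call JSON instead of answering in prose, so the problem is on the serving side, not your prompt format. A small client-side sketch for detecting such a leaked reply so you can discard and retry it (the helper name is mine, and this is a guessed workaround, not an official fix):

```python
import json

def is_tool_call_leak(reply: str) -> bool:
    """Heuristically detect a reply that is a leaked tool-call JSON array
    (e.g. [{"tool_name": "directly-answer", "parameters": {}}]) rather
    than prose. Returns True when the whole reply parses as such."""
    text = reply.strip()
    # Leaked replies sometimes start with a stray "json" tag; drop it.
    if text.startswith("json"):
        text = text[len("json"):].strip()
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return (isinstance(data, list)
            and all(isinstance(d, dict) and "tool_name" in d for d in data))
```

A chat client could call this on every response and re-request the turn whenever it returns True, instead of showing the raw JSON to the user.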
It's pretty clear that the 08-2024 model is nearing the end of its life in HuggingChat.
The nail in the coffin is the fact that the Command R7B 12-2024 model is now in the Cohere space. All these errors are what happens when they're close to retiring a model. It happened with the old Command R model before it got removed, and now it's happening with 08-2024. 12-2024 is coming soon.
I was hoping it would be at least 01-2025, since we're nearing the end of December and the New Year. Missed opportunity, lol.
It had better not be full of GPTslop. Cohere, please don't disappoint again.
I just tried the new R7B 12-2024 model in the Cohere space, but it won't work as of right now.
It keeps giving me this error message: "{"message":"invalid request: tool_choice 'required' can only be specified if 'tools' are specified"}"
I see no tools in the individual Cohere space other than the small web-search toggle above the prompt box, and even that doesn't work with it.
Already wanting a model for 2025 now.
@TheAGames10 we're looking into it!
@iNeverLearnedHowToRead if you come across errors, can you share the conversation with me? I'd love to dig deeper into this but can't reproduce the issue :(
I am unable to get good responses from either the dedicated space or the main HuggingChat space.
The dedicated space just keeps repeating sentences within the response indefinitely (it's even present in the newest R7B model) and it no longer does what it's asked at all. So much for the R7B model being considered 'new' when it's already showing the same sentence-repetition problem.
The model in HuggingChat doesn't listen to the prompts it's given and eventually degenerates into random, unnecessary, glitchy code output.
These models haven't been fixed or touched in the past two weeks. Please fix them so people can enjoy the models in both HuggingChat and the dedicated space.