[MODELS] Discussion
What are the limits of using these? How many API calls can I send per month?
How can I know which model I am using?
Out of all these models, Gemma, which was recently released, has the newest information about .NET. However, I don't know which one gives the most accurate answers for coding.
Gemma seems really biased. With web search on, it says it doesn't have access to recent information when I ask it almost anything about recent events. But when I ask about the same recent events with Google, I get responses covering them.
apparently gemma cannot code?
Gemma is just like Google's Gemini series models: it has a very strong moral limit imposed on it, and any operation that might relate to file operations or deeper system access gets censored and refused.
So even if there are solutions for such things in its training data, they will just be filtered out and ignored.
But I still haven't tested its coding accuracy on tasks unrelated to these kinds of "dangerous" operations.
I think if someone is smart enough to find HuggingChat and sign up, then they are ready to handle Llama's responses. I believe Meta writes this in the terms of use to protect themselves, not because it's some deal breaker if you're 17 instead of 18.
No, one must comply with the terms.
Bro...
Hi community,
whenever I use Qwen2.5-Coder-32B-Instruct in HuggingChat and it shares a code snippet with me, the snippet seems to be rendered with HTML entities and interpreted as such. This makes all code snippets show "&gt;" instead of the greater-than sign ">". I didn't see this happening when using Llama 3.3, though.
Any idea on how to solve it?
Thanks a lot for any help!
Cheers.
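As a workaround on the copied text (not a fix for the HuggingChat renderer itself), Python's standard library can undo this kind of escaping; the snippet below is a minimal sketch using a made-up example string:

```python
import html

# A hypothetical code snippet copied from the chat, with HTML-escaped characters
escaped = "if (x &gt; 0 &amp;&amp; y &lt; 10) { return x; }"

# html.unescape converts entities like &gt; and &amp; back to literal characters
fixed = html.unescape(escaped)
print(fixed)  # if (x > 0 && y < 10) { return x; }
```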
Hello! I have been using the CohereForAI/c4ai-command-r-plus-08-2024 model for a long time. In the prompts I have the rules for the role-playing game, and usually the model recognized them normally. They are written in the usual format:
Rule 1 - description.
Rule 2 - description. And so on.
About a week ago I entered the chat and found that the model refused to understand my prompts.
Instead of a normal answer, it answers something like:
```json
[ { "tool_name": "directly-answer", "parameters": {} } ]
```
I tried to ask the model why this was happening, to which the chat replied that it did not understand the prompts and was trying to accept them in JSON format. Unfortunately, I don't know how to write in this format. What should I do? And why is this happening?
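For what it's worth, the output the model leaked is ordinary JSON, not a special prompt format you need to learn; the field names below are copied straight from the reply above, and the sketch just shows that standard JSON tooling can produce and read it:

```python
import json

# The tool-call structure the model emitted, reproduced from the reply above
tool_call = [{"tool_name": "directly-answer", "parameters": {}}]

# Serializing it yields the same kind of text the chat displayed
text = json.dumps(tool_call)
print(text)

# Parsing it back confirms it is plain JSON
parsed = json.loads(text)
print(parsed[0]["tool_name"])  # directly-answer
</antml>```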
Honestly I'm surprised we don't have Mistral Large on here. It's available on Le Chat, but the UI over there is meh imo, plus there are no top-k or top-p settings to play with.
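For anyone unfamiliar with the settings mentioned: top-p (nucleus) sampling keeps the smallest set of tokens whose cumulative probability reaches p, then renormalizes. A rough, illustrative sketch of just the filtering step (not Le Chat's or HuggingChat's actual implementation):

```python
def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p, then renormalize. `probs` maps token -> probability."""
    # Rank tokens by probability, highest first
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break  # the nucleus is complete
    # Renormalize the surviving probabilities so they sum to 1
    return {token: prob / total for token, prob in kept.items()}

# With p=0.9, the lowest-probability token "d" is filtered out
print(top_p_filter({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}, p=0.9))
</antml>```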
@DarkCesare glitches have been less frequent but they still haunt me
DEVS FIX EM AGAIN IF YOU HAVE DONE IT ALREADY
It's pretty apparent at this point that this kind of model behavior is intentional for some reason, since "fixing" it gave us this instead of how good it was initially.
The list is quite long, as the UI was unresponsive in my browser due to the bug with stopping generation.
And why does retrying generate variations of the bug, but not work before that?