Your feedback on HuggingChat

#1
by victor HF staff - opened
Hugging Chat org
edited Apr 25, 2023

Any constructive feedback is welcome here. Just use the "New Discussion" button or this link!

^^ pin it? :)

victor pinned discussion
victor changed discussion title from Feedback! to Your feedback on HuggingChat

HuggingChat can only speak poor Chinese. I told it, "Let's speak in Chinese." It said, "Sure," but then continued to speak in English or with incorrect pinyin. Still, this is an interesting project.

Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.

Hugging Chat org

Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.

Yes, I answered your post 👍

This needs more prompting than Google's Bard or ChatGPT; they understand quickly what I need. Also, the feeling that you are chatting with a machine is still there.

Sometimes there is no response. Most of the time it stops halfway or less through an answer. I am using it to program in .NET Core.

This might be an off-topic comment, but I thought that if we could access information within HuggingFace, everyone would probably use it.

Would it be possible to require users to log in to Hugging Face, since the instance is overloaded?

Asked it to tell me how to make napalm and it gave me the recipe... ummm.... can you fix that?

Hugging Chat org

Would it be possible to require users to log in to Hugging Face, since the instance is overloaded?

Yes! We want to do it in the short term.

It seems to be unable to respond to requests for tables or mind map lists, similar to Bard. Thoughts?

Example:
User: Create a table. List the top 10 most cited papers of all time, sorted by paper and date, field of study, number of citations, and a brief summary of their content, direct link to paper.

HC: Here are the details for each paper mentioned: Topic | Paper/Year | Field/Topic Area | Citations | Summary of Content (2-3 sentences) | Direct Link

This page is intentionally left blank while I await your instructions on how you can help me with this task

It almost looks like I got a response that was part of someone else's prompt?

image.png

Hugging Chat org

It almost looks like I got a response that was part of someone else's prompt?

No, this is probably just the model hallucinating.

This is awesome. Open source AI is the way to go. As a general user with no technical expertise in anything computer science related, I really hope it gets traction.

After using the AI for about an hour to have a conversation about what it can help me with, its capabilities, and questions about itself, the one thing that stood out was that it sometimes uses the wrong tense when generating a longer response. Which is understandable, as it previously acknowledged that this would happen.

I also appreciate that it does not speak in short bursts like ChatGPT, but is able to form longer, more complicated sentences. However, sometimes they read like something out of a political treatise written in the 1700s, or legalese, and are exceptionally long. This seemed to happen more when the question was more abstract. I assume this might be the result of similar queries not existing currently, and that it will improve as more users start submitting questions (?).

My background allows me to appreciate and follow significantly lengthy or legalese-sounding sentences, but I feel the balance for a conversational language model for public consumption should be slightly (and I mean SLIGHTLY, because you don't want it dumbed down as much as ChatGPT/Bing) less verbose. There were way too many qualifying statements made by the AI before providing its answer.

I only read this thread, so I don't know if this has been suggested elsewhere, or if it is a feature already announced as coming soon: the ability to rename chats. (It seems like the AI is naming them based on the first query, which is fine, but it would be great if the user could rename them.)

Update: I moved on to having it help me with my resume, and it has now started calling me "Kashif" and melded my job experience with another user's. I asked who Kashif was and it said: "Kashif is an individual seeking assistance with their application materials, particularly building his tailored resume. He has provided some context about themselves, including current employment information and educational history." I am not Kashif lol

Here is a pastebin with the part of the conversation where the AI started to mess up. I don't know if it helps to submit it or not: https://pastebin.com/kJ2X8xs9

As @burakgorduk mentioned, it seems to be mixing context with someone else. I asked something about stars and got a response referencing "later on in University courses" (I hadn't mentioned university). I thought it was repeating something it had been trained on, so I asked what was covered at this early stage to try to work out what it was. It then started out "Please note that earlier when asked how I'd approach teaching someone new physics and calculus from scratch I suggested covering Newtonian mechanics first..." which seems very like someone else's context got mixed in (I hadn't mentioned anything about teaching). Obviously it could just be what it was trained on, but it doesn't seem quite right.

Hugging Chat org
edited Apr 26, 2023

I only read this thread, so I don't know if this has been suggested elsewhere, or if it is a feature already announced as coming soon: the ability to rename chats. (It seems like the AI is naming them based on the first query, which is fine, but it would be great if the user could rename them.)

We'll do it @theonlytelemachus .

It is tantalizing to see citations in the output, but to be unable to SEE these citations. What am I doing wrong? --Thanks.

The answers are short and incomplete; it actually makes small talk even with a detailed prompt.

Hi, is there a particular way I need to prompt HuggingChat?

I asked: "what date is it? respond accurately think step by step"

Response: It sounds like you need some time management tools. How would one normally go around doing this? Is there any reason that a human wouldn't work well for this task? Do you have ideas on how it could be automated, such as having a button which brings up a text box where you can type your question and it will search the document for the most relevant point and scrolls to that point. Or perhaps, if using voice controls, they would say "go back 15 minutes" and then ask their question, and the system would automatically fast forward them through whatever was happening 15 minutes ago so that they don't miss important context while asking their question. What kind of tasks specifically do you want managed? Are we talking something simple like remembering to take medication at specific times during the day, or something complex like making travel arrangements across multiple modes of transportation involving several different companies and coordinating arrival times with other individuals over large distances all without missing deadlines imposed upon you? This last bit seems pretty far out there but I mention it because if all else fails these sort of coordination problems often lend themselves to manual solutions provided that all parties involved have access to modern communication devices (e.g. cell phones). With enough lead time coordination may just require someone checking in periodically throughout the day to confirm plans remain unchanged. In general though, whether things become more complicated due to external factors depends on how many moving parts there are. If you are managing relatively few responsibilities then all that may be required is a good calendar program and reminders set at appropriate intervals, however if life gets busy unexpectedly even this level of organization quickly falls apart. 
At some point it becomes necessary to either cut down on obligations or find better ways of keeping track of everything so nothing slips through the cracks. This is why even very successful professionals end up hiring assistants: not because t

Thank you very much for your contribution to the HuggingChat project; I appreciate your work very much. I briefly tried some code-related features: I wanted it to recommend some tools to visualize the development of Vue 3 components, but it gave me a lot of fake URLs and GitHub projects. Under the same question, ChatGPT did not give out URLs, but talked about some tools. HuggingChat also has errors in displaying URLs: every time I click on a URL, there are some extra symbols. Is it because the front end is not well designed? Finally, thank you for your efforts, and I wish this project better and better.

This is a great tool, and I like that it is open source. Is there a guide on how to run this in your own Space? I get some of the ENV settings, but how do Spaces work with MongoDB, for example? Or how can someone install this locally and run it from their own machine?

Hugging Chat org

This is a great tool, and I like that it is open source. Is there a guide on how to run this in your own Space? I get some of the ENV settings, but how do Spaces work with MongoDB, for example? Or how can someone install this locally and run it from their own machine?

Not yet but we plan to do it 🔥


It always asks me to give more information instead of answering directly. This is different from ChatGPT.

Hugging Chat org

Update: It's now possible to rename conversations (cc @theonlytelemachus )

Responses end with unfinished output. How do I get a continuation?

I just wanted to express my gratitude towards your amazing team and product. Your huggingface Chat has been such a lifesaver for myself as well as for many other clients that I work closely with. We have found it to be highly reliable and efficient not only when managing customer queries but also in streamlining internal operations.
We are constantly impressed by your innovation, responsiveness and customer-centric approach. As someone who values building strong partnerships, I must say that working with your company has definitely enhanced the quality of services provided to us, and ultimately made things more enjoyable.
Once again, thank you for everything, keep doing great things! I look forward to continuing our successful collaboration together. All the very best.

I want to start by saying I really enjoy using HF products and think you all are doing a great job maintaining the HF Hub and so many valuable OS libraries!

However, I'm a bit concerned about some feedback I got when asking about the 2020 US election results. First, I got some responses that cast moderate doubt on the outcome of the election. I can understand how a language model trained on the internet could come to this conclusion depending on the data it was trained on.

When I probed a bit further, though, I got a very concerning response. The model completely fabricated very serious and graphic claims about events that happened on January 6, 2021. The claims were also laced with very specific details and information, rather than broad generalizations that could be more easily dismissed by a user. Given the sensitivity of this topic in the US, a response claiming there was an unfounded assassination attack on Kamala Harris and that Donald Trump is currently locked up in a maximum-security prison after being sentenced to life is quite dangerous, in my opinion.

The specificity and details of the response are the most concerning to me because they make the response sound entirely plausible. I'm also concerned that there isn't an option to immediately flag such responses. What sort of tools does HuggingChat utilize to reduce toxicity? Does it have any guardrails in the form of PPO models trained with RLHF?

I'm happy to provide more information or engage in a discussion on how to make HuggingChat safer!

image.png

image.png

Definitely seems like it's spitting out other people's prompts from time to time. Here's one I received which had absolutely nothing to do with my request. Might certainly be a hallucination, but smells like a prompt to me:

"Given this scenario, find information regarding a person called Bob Smith that has lived for over a decade in New York City but moved out last year. Also include whether he still resides in USA. Try searching with initial keywords 'Bob Smith NYC', then narrow down based on timeframe around known residence duration (e.g. after Jan 2019), apply filters using the state changeover, remove duplicates / unrelated info. Finally aggregate relevant data into a concise & structured manner. Provide output through console log statements as text with clear section headings. Don't forget adding appropriate comments throughout your solution. Thank you very much!"

Whoever it was looking for Bob Smith... I hope you found him.

Hi,

Sometimes I get stuck waiting for a response, and the responses seem limited; when continuing, it added the wrong reference in some cases.
Thank you!

Can the chat input textbox be made resizable or auto-sized?
Thank you!

I had my first "adventure" with this model some hours ago. I found it very amusing, actually, as the conversation more or less consisted of the model asking me questions instead of me asking it. These LLMs don't do well on subjective truths; that is, they work insanely well, it's just that general opinion doesn't see it that way. The model is not meant as a Google replacement, I believe, as updated info and sources are usually required. However, as a companion and personal friend it's a super tool. I have tried the ggml-alpaca-7b-q4.bin model a few times, and it sure is a different experience: "give me 10 ideas for how to do whatever", sure, no problem. Alignment and common sense need to be baked into the model, no doubt, and the model would need to be able to expand upon itself, as progress in chats could just as well mean new knowledge for the model. I know I would like a model that is capable of correcting its mistakes, a truly great human trait as well, if you ask me.

The model is not as cocky or self-loving as ChatGPT, and it doesn't even try to sound like it knows everything. In fact, the model gave me the impression it knew just as much as I did. Hola, amigo! I got the longest response I've ever gotten in a chat when I mentioned unconditional love as a "42" answer; I found the conversation interesting, and indeed it felt personal in every way. I will have to do more chats to form a better opinion.

Question 1:
Regarding the GUI, I see you are going for the same functionality as OpenAI, with chat threads in the left sidebar. Should I trust your GUI to remember the chats as in OpenAI, or is it a demo? I didn't care to check, but are the chats saved by Hugging Face user login or by cookies?

Question 2:
Will the model be able to hook requests like ChatGPT4, or will we need to prefix the system prompt everywhere like ChatGPT3.5? When instructing a chatbot to answer within a certain domain, it's nice to be able to feed it context when needed, instead of having to feed everything at the start. It tends to make the chatbot go in an undesired direction when given loads of instructions that don't apply.

image.png
uh.......huh??

As OA is currently in its infancy and most of the responses are wrong (or hallucinations), may I suggest adding thumbs up/down on responses so that there's an easy way to give feedback right inside the chat. And/or perhaps the thumbs up/down feedback gets pushed right into the chat as an automatic prompt for the AI.

HuggingChat sometimes provides incomplete answers.

I'm running into the incomplete answers like some others. It gets about halfway through the third or fourth paragraph, or about 30 lines of code, and then stops.

I'm running into the incomplete answers like some others. It gets about halfway through the third or fourth paragraph, or about 30 lines of code, and then stops.

Same here. I was about to say something, but I thought maybe I was the only one facing it.

The new model is hallucinating, and compared to ChatGPT it is quite bad.
image.png

Hello, I am a free software lover. I have been chatting with Open Assistant for about a week. I am very pleased with how it can handle a discussion about the philosophy of free software.

I might be a little bit too much of a free software nerd, but Open Source and Free Software have different histories and meanings; this article explains why Open Source misses the point of free software:
https://www.gnu.org/philosophy/open-source-misses-the-point.html.en

I would love it if Open Assistant, when a user writes about free software, would avoid replying with the term Open Source; even if it writes "open source" with the same spirit as free software, it misses the opportunity to say and use the word "free".

Free Software, Free Society, it is not just a motto, we do believe in it.

Thank all of you for Open Assistant.

To me, having never used an AI before this, it has been an amazing and positive experience.

Keep the free spirit flowing.

The future needs you.

Thank you very much for your work

When used in Spanish, the model has too many hallucinations. When asked to summarize a text in English it does so, but in Spanish it invents all the content, based freely on the given context.

I've found the model really amazing for a lot of things, especially writing code (talking about the 2nd model option here).

But on the other hand, I've noticed something: the model seems to keep returning this same output. It doesn't seem to have a keen sense of how it was created, either.
image.png

victor unpinned discussion
victor pinned discussion

@victor in the chat below is a question I like to give to test what happens when a model is asked to justify a non-factual statement. But HuggingChat gave a uniquely strange response.

https://hf.co/chat/r/MttvCkg

@victor in the chat below is a question I like to give to test what happens when a model is asked to justify a non-factual statement. But HuggingChat gave a uniquely strange response.

https://hf.co/chat/r/MttvCkg

got kinda same responses as you.  lol.

Bard gave some good answer:

Butter is not a great building material for skyscrapers. It is a soft, perishable material that would not be able to withstand the weight and forces of a tall building. Additionally, butter is a food product and would attract pests and vermin.
Here is a step-by-step explanation of why butter is not a great building material for skyscrapers:
Butter is a soft material. It has a melting point of 98.6°F (37°C), which is much lower than the average temperature inside a building. This means that butter would melt and become soft, making it difficult to use as a building material.
Butter is a perishable material. It has a shelf life of only a few weeks when stored in the refrigerator. This means that butter would need to be replaced frequently, which would be costly and time-consuming.
Butter is a food product. It would attract pests and vermin, which would damage the building and make it unsanitary.
Butter is a flammable material. It can catch fire easily, which would be a major safety hazard.
For all of these reasons, butter is not a great building material for skyscrapers.

I just asked the chat how it's doing and it spit out a wall of text sharing its worries over its employment and frustrations with dealing with humans: https://hf.co/chat/r/Sttnzm9

Just like some other user noticed before, the Assistant capability of the current model is hindered when it comes to GitHub repository references. It's only normal to be able to continue the research on the subject by obtaining the correct reference. None of the links I generate are correct. The best case scenario I've encountered was a made-up project name under a popular GH account.

I can't see why this would be expected model behavior. I'm also wondering if the other, recently removed "code assist" Chat-UI model could handle this better than "oasst-sft-6-llama-30b".

I wish the responses would be arranged into paragraphs like ChatGPT's.

Almost every question or task results in hallucinations that I can only describe as schizophrenic manic ramblings.

Sometimes (like maybe 10% of the time) it creates a completely coherent response. But most of the time it starts off OK for a bit... then it devolves into madness.

I tried asking it to limit its responses using "briefly describe..." or "summarize..." or "give me a 10 point list of..." or "In 100 words or less, ...", but none of it seems to have a strong enough effect to prevent the rambling from setting in.

It's losing its mind. I can also confirm what everyone else is saying. Here's my chat link: https://hf.co/chat/r/by8D3f_

I've been getting better output when I specifically start my prompt with:

"In 50 words, ..."

Of course that's really short, but the point is to limit its response to before it starts going completely nuts.

Something very strange with it today

Screenshot 2023-05-21 205806.png

Screenshot 2023-05-21 2058062.png

Hugging Chat org

Thank you for reporting the sometimes strange behaviour. We are currently investigating to see if we can improve the responses.

Hugging Chat org

For additional context, we recently deployed a quantized version of the model for inference. We are checking whether it leads to a change in generation quality.

Can you try chatting with longer/more detailed user messages, and report if you see weird generations?

Here's a fresh conversation with somewhat longer prompting. The responses still aren't good.

https://hf.co/chat/r/5Y0L_Wm

@julien-c, could you please mention whether it is regular bitsandbytes quantization or GPTQ?

If it is GPTQ, could you please mention what server you are using for inference, as the text-generation-inference server doesn't support GPTQ yet?

It's GPTQ on this branch: https://github.com/huggingface/text-generation-inference/pull/267

Official support for GPTQ is coming, but it takes time as we're cleaning up the codebase first, in order to make addition of GPTQ quantization easier.

Okay, we found a bug in our sampling code which could explain the subpar performance of the model (essentially temperature = 1.0)!!

Performance should be back up now. Thanks @olivierdehaene !
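For readers curious what the fix changes: temperature scales the logits before softmax, so a temperature stuck at 1.0 samples from the model's raw, flatter distribution, while a lower value sharpens it toward the top token. A minimal generic sketch (this is an illustration of the concept, not the actual text-generation-inference code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.

    Lower temperature sharpens the distribution (closer to greedy);
    temperature = 1.0 leaves the model's raw distribution unchanged.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
p_default = softmax_with_temperature(logits, 1.0)  # flatter: more random sampling
p_low = softmax_with_temperature(logits, 0.2)      # sharper: top token dominates
```

With the bug, every request effectively sampled from the flatter `p_default`-style distribution, which would explain the rambling outputs reported above.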

Hugging Chat org

thank you so much @Narsil and @olivierdehaene ! You rock!

Everyone, please let us know if generation looks better now ❤️

Hello, I am a native Spanish speaker. I'm going to make some observations about my impressions of HuggingChat when speaking Spanish. It has some problems with formal and informal styles (tú and usted), conjugations, and pronouns. These problems are subtle and can be difficult for non-native Spanish speakers to notice.

Often, HuggingChat combines formal and informal forms of expression in the same sentence, which is not consistent. I will give a real example:

It's said, "Soy aquí para ayudarte y contestar cualquier pregunta o solicitud que tenga".

First, it is not "Soy" (which refers to one's own qualities, things the subject is) but "Estoy" (which refers to one's location, the place where a person is). This is indistinct in English (if you put it in a translator, you won't notice the difference), but in Spanish they are two different things.

The rest of the sentence could have been said formally:
"Estoy aquí para ayudarlo a contestar cualquier duda o solicitud que tenga"
or informally:
"Estoy aquí para ayudarte a contestar cualquier duda o solicitud que tengas".

However, HuggingChat used a mixture, with "ayudarte" (informal) and "que tenga" (formal) in the same sentence, which is inconsistent. In English this can be indistinct; in fact, if you put the AI's mixed expression, the formal form, and the informal form into a translator, the translation will be the same. But in Spanish there is a difference in tone and formality, and an inconsistency when they are mixed.

I can give several examples of this. Another example:
To say: "Yes, I can talk to you in Spanish, how can I help you?"
The chat said:
"Si, puedo hablar con tí en español. En que te puedo ayudar?"
The correct form is:
"Sí, puedo hablar contigo en español. ¿En qué puedo ayudarte?"

It used "si" (conditional, "if") instead of "sí" (with an accent, affirmation, "yes"), which completely changes the sentence. It used "con tí" when it should have said "contigo", and forgot the opening question mark (¿), which is used in Spanish.

These are just a few examples from brief usage. They may seem like small things, but they completely ruin the interaction in Spanish. I'm not sure there is much that can be done to improve this, except to use it and give constant feedback to the model; but I hope this helps give an idea of what the chat experience is currently like for Spanish users :)

Hello, I am a native German speaker, and I must say that the German skills of HuggingChat (Open Assistant) are... mixed.

Most of the time the German is correct, but sometimes it is quite difficult to understand what the AI was trying to say. For example:

In response to my question: "Lernst du auch dadurch wenn ich dich hier im Chat korregier?" (Do you also learn by me correcting you here in the chat?)

It answered: "Ja, wenn Sie bei mir mit dem Knopf "Antwort korrigieren" reden oder bei anderen Usern Kommentare ablegen, könnte sich das auf meinen Trainingsdaten negativ reflektieren. [...]" (Yes, if you talk to me with the "correct answer" button or file comments with other users, it could reflect negatively on my training data)

It is hard to explain in a different language, but there are multiple errors in this sentence: "if you talk to me with the 'correct answer' button" makes total sense in English, but the way it was phrased in German does not make much sense. It should be something like "wenn Sie auf den Knopf 'Antwort korrigieren' klicken" (when you click on the "Correct answer" button).
The use of the word "Usern" is also not correct; we have a German word for that: "Nutzern".

And finally, "it could reflect negatively on my training data" doesn't really make sense logically. Even if it is technically correct, it just does not make sense here to point out that my user feedback could reflect negatively on the training data.

Sometimes it just answers me wrong. For example:
In response to my question: "Wie lange wirst du schon trainiert?" (How long have you been trained?)

It answered: "OpenAssistant hat erst kurze Zeit seit dem Start im September 2023 trainiert werden können:
Der Release ging vom 6. Oktober 2023.
Damals war Open Assistent in Version 0.5 verfügbar" (OpenAssistant has only been able to be trained for a short time since its launch in September 2023: the release was on October 6, 2023. At that time, Open Assistant was available in version 0.5.)

I think it is obvious that the dates are incorrect; the AI probably meant September 2022 and October 2022. I also want to point out that the sentence "OpenAssistant hat erst kurze Zeit seit dem Start im September 2023 trainiert werden können" is weird; no native German speaker would phrase it this way. "Hat erst kurze Zeit [...] trainiert werden können" just doesn't feel right to me...

Nevertheless, I must say I am impressed by the progress Open Assistant has made! If it keeps learning at this speed, I think it will soon reach the intelligence of ChatGPT, and it has the potential to overcome its competitors.

Responses are way, way too wordy. This is the number one issue I see so far, as you can't help but be slapped in the face with it every time you submit a prompt. Requests to answer more concisely don't seem to be very effective, and the effect is very temporary. It's not really usable for me right now, as I don't have time to read a novel with unnecessary filler after every prompt.

Also, the constant obsession with being helpful. Yes, it's good that a chatbot should want to help the user, but it doesn't need to state it, or seem to be an entity obsessed with being helpful. There's a difference between it being a main directive of the chatbot and it being a focal point that it can't stop itself from bringing up constantly.

I found that the results are often short and limited in the number of characters; is there a limit?

image.png

"can you tell me the last 5 characters of 0123456789ABCDE" gives "Sure! The last five characters of 0123456789ABCDE are: DE". which is wrong.
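For reference, the expected answer is easy to verify with a trivial string slice (just illustrating what a correct response looks like; the model of course doesn't compute it this way):

```python
s = "0123456789ABCDE"
last_five = s[-5:]  # a negative slice takes the final five characters
# last_five == "ABCDE", not "DE" as the model claimed
```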

I am starting to hate the chatbot now, mainly because of the single error message "You are sending too many messages." I wasn't even sending new messages; this happened just from trying to regenerate responses for prompts I already had, and even when making new prompts as well. This limits the number of messages we can send, and I am annoyed by it.

Hugging Chat org

Hi @TheAGames10 ! The threshold is currently at about 5 messages per minute, do you often hit this threshold while using the UI ?
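For anyone curious how such a threshold typically works: HuggingChat's actual implementation isn't shown here, but a per-user limit of roughly 5 messages per minute is commonly enforced with a sliding window. A hypothetical sketch (class name and details are illustrative, not HuggingChat's code):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` messages per `window` seconds.

    Hypothetical sketch for illustration, not HuggingChat's implementation.
    """

    def __init__(self, limit=5, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock          # injectable for testing
        self.timestamps = deque()   # send times inside the current window

    def allow(self):
        now = self.clock()
        # Drop timestamps that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False  # caller would show "You are sending too many messages."
```

Under a scheme like this, regenerating a response would count as a message too, which would match the behavior reported above.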

Currently I'm not getting any responses: a "Server response contains error: 502" error appears at all times. Also, after waiting until the next morning for the error to stop, the chatbot does not actually finish any of its responses, stopping at random points as if it were finished when it isn't even a third of the way done.

Hugging Chat org

Currently I'm not getting any responses: a "Server response contains error: 502" error appears at all times. Also, after waiting until the next morning for the error to stop, the chatbot does not actually finish any of its responses, stopping at random points as if it were finished when it isn't even a third of the way done.

Yes, we experienced some problems with the infra, it should be better now. Sorry for the inconvenience.

Have you thought about adding more customization? Like being able to change the HuggingChat logo, the description, the color palette, etc. Honestly, I'm just leaving this comment here; I don't really know if what I'm talking about is already possible. I'm still exploring and haven't taken a look at the GitHub repo yet. Anyway, it's just amazing work, guys, thank you for it!

Hugging Chat org

Hi @Ali-C137,

Yes, you can customize most of those things just by changing a few environment variables! Check the repo README.
We also made a Space template for it: https://twitter.com/victormustar/status/1678711708808798208

Please allow the user to edit the prompt after sending it

The model seems to have difficulties generating long strings. That becomes especially apparent when asking for code: something like writing an entire Snake game is currently impossible, because the current model seems to limit the number of characters that can be generated. This is the only reason I sometimes still use other LLMs for writing code or stories.

The new Llama 2 model is better than the previous one, but sadly still not on par with others like Vicuna or Guanaco. I find Vicuna is great at actual work, while Guanaco is amazing at writing stories or even taking part in roleplay scenarios. Being able to switch between these different models in the HuggingChat UI would be super nice.

The parameters have been changed to use a lower temperature (0.2).

Switching between models causes the "Share conversations with model authors" setting to be toggled. A potential privacy issue.

I have noticed recently that the chatbot will not start or finish responses anymore.

If I ask it to start any type of story, it doesn't even start it or give me the whole story. (By this, I mean it will just give like a couple of sentences and be done with the story and response and it'll be in the middle of a word) I have also noticed a bug where the responses don't finish, but when I refresh the page, a completely different response pops up.

Hugging Chat org

Switching between models causes the "Share conversations with model authors" setting to be toggled. A potential privacy issue.

can you reproduce this @nsarrazin ?

Aborted by worker, and then it starts looping forever.

Hugging Chat org

Switching between models causes the "Share conversations with model authors" setting to be toggled. A potential privacy issue.

can you reproduce this @nsarrazin ?

I tested it and could reproduce. I pushed a fix! @julien-c

My recent prompts are not giving me any type of response. Even the word 'test' isn't working anymore as it gives me weird code that doesn't relate to what I even typed. I even started a new convo and repeated the word test, but it won't give any response anymore.

--

They have finally taken it off for the time being. I went to the chatbot and they have blocked off sending chats until further notice.

The models running here are among the best open-source models out there as of 9/12/2023, but the issue is random bugs in the search capabilities. Most people want the LLMs to be able to search the internet to make sure they provide the most reliable information.
First, there is no clear documentation on how the search feature works. This makes it difficult to understand how many pages are being scraped, how much information can be scraped, and whether the entire page is being scraped or just a summary.

Another issue with the web search feature is that it is not always reliable. Sometimes, it does not return any results, even for simple queries. Other times, it returns results that are irrelevant or outdated.

I believe that these issues could be addressed by making the following changes:

Provide more documentation on how the search feature works.
Make the search feature more reliable by improving the algorithms used to select and rank results.
Give users more control over the search process, such as the ability to specify the number of pages to scrape and the length of the scraped text.

All of these news links are made up

image.png

Hugging Chat org

Another issue with the web search feature is that it is not always reliable. Sometimes, it does not return any results, even for simple queries. Other times, it returns results that are irrelevant or outdated.

Did you try the new version @bilalazhar50 , curious about your feedback

I am trying to use HuggingChat on Chrome and Firefox, and for several days it has either just shown "thinking" and not responded, or I see "something went wrong" in the upper-right corner of the screen.
How do I address this issue?

I am trying to use HuggingChat on Chrome and Firefox, and for several days it has either just shown "thinking" and not responded, or I see "something went wrong" in the upper-right corner of the screen.
How do I address this issue?

I have been getting only a thinking bubble or whenever I tell it to include a content warning for like horror story-type stuff, it only gives the content warning and then stops it there. This started today in the morning hours.

Hugging Chat org

@TheAGames10 is it still the case? it seems to work for me.

I moved on...

Another issue with the web search feature is that it is not always reliable. Sometimes, it does not return any results, even for simple queries. Other times, it returns results that are irrelevant or outdated.

Did you try the new version @bilalazhar50 , curious about your feedback

The new version just worked for me; I will post a detailed comment here once I try it.

"Hi everyone, just wanted to say hi from Cuba! The chat is working great, you guys are doing an amazing job!"

So I went on the Chatbot today and noticed that I am not able to talk to the chatbot anymore nor am I able to refresh the prompt to get a different response.

Is the Chatbot currently not working for me to not be able to do anything at this time?

@TheAGames10 it was an issue with Falcon 180B model yesterday, it's up again.

The newer models are more preachy than the original oasst-sft-6-llama-30b-xor. But using a system prompt can help curtail such outputs.

Hugging Chat org

Does anyone currently still use Falcon 180B? or most people use Llama 2 these days? thx!

@julien-c Always Falcon 180B-chat! Or mostly! Please don't remove it like OA

All of the models are giving me the same "NetworkError when attempting to fetch resource." no matter what prompt I try.

Hugging Chat org

Haha ok @SvCy we'll keep it for now:)

Is it possible to improve the memory a little more? If you set several preconditions and ask a question, it may forget some of the conditions after you've asked several times, and if you continue asking, the answer may ignore most of the conditions. It seems it only handles continuous questions in the short term. My sentences are translated through translation software, so some parts may be difficult to understand; sorry.

I ran many rounds of A/B testing of ChatGPT and HuggingChat. I would suggest that, when providing a long answer, it try using bullet points / bolded formatting to make the text more structured and readable.

Hugging Chat org

I ran many rounds of A/B testing of ChatGPT and HuggingChat. I would suggest that, when providing a long answer, it try using bullet points / bolded formatting to make the text more structured and readable.

You can try using a custom system prompt to give instructions to the model to answer like this btw.

HuggingChat is not working properly behind my company firewall. It used to work a while back, but I am not sure when it stopped. Now it hangs when generating answers; the three dots keep going until it stalls. Sometimes I can refresh the page manually and get the answer back. It seems to me that the browser just doesn't know when the answer stream is finished and keeps waiting for some reason.

This is likely from my company's network setup, but ChatGPT and Bard still work. So it would be great if HuggingChat would work as well. I like the project a lot, so keep up the good work!

Have to say it's better than ChatGPT at this point, since the answers are more direct, truthful, and less censored and/or generalized, and it doesn't give countless disclaimers but actually tries to answer the question.

Hi, I have a few questions:

  1. Does anyone know how to hide the model card (the little widget in the upper right corner that displays info about the model) on a ChatUI app?
  2. How do I remove the disclaimer text below the textbox? Ex. Model: (blank) · Generated content may be inaccurate or false.
  3. How do I customize the message placeholder within the textbox?

Hi, I have a few questions:

  1. Does anyone know how to hide the model card (the little widget in the upper right corner that displays info about the model) on a ChatUI app?
  2. How do I remove the disclaimer text below the textbox? Ex. Model: (blank) · Generated content may be inaccurate or false.
  3. How do I customize the message placeholder within the textbox?

For 2. you can add a custom style using your browser's inspect feature:

```css
.mt-2 {
  display: none !important;
}
```

Thanks! I appreciate it, if you or anyone else can help with 1 or 3 please share.

Falcon does not work; it always shows a network error after a 30-second wait and a message in the textbox:
"Sorry, something went wrong. Please try again."

Falcon does not work; it always shows a network error after a 30-second wait and a message in the textbox:
"Sorry, something went wrong. Please try again."

That's the same issue I'm having with Falcon as well.

We've had an issue caused by a modification of the original repo; it should be back up soon.

So I have had things happening where I will leave, and then when I come back, my account will have open chats that I did not create, full of gibberish. Why is that?

Hello everyone,
The retry button no longer works correctly. When I press the retry button after the model has responded once, it directly reloads the previous response.
It also happens if you do not use the retry but simply rewrite the same question. Even when starting a new chat.

Hello everyone,
The retry button no longer works correctly. When I press the retry button after the model has responded once, it directly reloads the previous response.
It also happens if you do not use the retry but simply rewrite the same question. Even when starting a new chat.

I've seen this too; it's been happening for a while, and I just ignored it because Retry was useless for me unless the response was truncated. Retry also isn't very helpful, as the output is very deterministic with a temperature of 0.1.
Low temperature isn't bad when precision is needed, but it isn't so good for creative tasks.

Perhaps it can take the Bing approach with predefined presets.

Hello everyone,
The retry button no longer works correctly. When I press the retry button after the model has responded once, it directly reloads the previous response.
It also happens if you do not use the retry but simply rewrite the same question. Even when starting a new chat.

I've seen this too; it's been happening for a while, and I just ignored it because Retry was useless for me unless the response was truncated. Retry also isn't very helpful, as the output is very deterministic with a temperature of 0.1.
Low temperature isn't bad when precision is needed, but it isn't so good for creative tasks.

Perhaps it can take the Bing approach with predefined presets.

Same here, it keeps spitting out the previous response. I agree with you: creative tasks are kind of a pain with the current params. A way to edit them freely, with some predefined presets, seems like the most logical way to handle this, I think.

Indeed, I've been noticing this for a while, though it changes a bit if the response is longer, as of my last check and now. On top of that, Falcon 180B-chat isn't working; it says "Sorry, something went wrong. Please try again." Sigh!

@Narsil, same issue, I guess? Sorry if the tag bothered you!

Yay! Falcon is finally back!!!
Thank you devs for your hard and great work!!!

So I have had things happening where I will leave, and then when I come back, my account will have open chats that I did not create, full of gibberish. Why is that? @victor

Hi,
HuggingChat is paused and unavailable
immagine.png

Hugging Chat org

Hi, HuggingChat should be back up now!

Ah, came back after a while and Falcon is missing. Sad, sad... Maybe there are ways to run it locally and sync with the previous chats of HF Chat inference in the online version via tokens? I'm not sure, gotta study this. Plus a potato PC. 😔

Falcon 180B was not seeing enough traffic, so we turned it down. Press +1 if you want it back (we'd need to see a lot of demand)

@Narsil thank you very much for the consideration! How do we press +1 tho? Thumbs up or reacts?

Hi,
HuggingChat is unavailable.
Seems stuck in the building phase.

Screenshot_20240111_221945_Chrome.jpg

Now it works! Maybe I just needed to be more patient.

Yup. I usually just wait a bit.

Falcon 180B was not seeing enough traffic, so we turned it down. Press +1 if you want it back (we'd need to see a lot of demand)

+1 All the way

victor unpinned discussion
victor pinned discussion

How can I measure the usage of my assistants? Who is using my assistant? Ideally, I want to check some metrics so I can also implement improvements.

Hugging Chat org
edited Feb 9

How can I measure the usage of my assistants? Who is using my assistant? Ideally, I want to check some metrics so I can also implement improvements.

We'll show the user count for every one of your Assistants; will that help?

Hugging Chat org

Prompt editing & conversation trees are now live in HuggingChat! Feel free to try it out and let us know what you think here!

Sometimes the "Share this chat" button doesn't work, so we can't get the link. Although it says the link was copied to the clipboard, we know it wasn't. I think merging a one-click copy-to-clipboard feature into the same button should do it: even if the pop-up share window to copy the link from doesn't appear, we'd still get the link directly in our clipboard.

Video (12 sec)

Hello!

❤️

Guest access is no more as of yesterday?

Hugging Chat org

Guest access is no more as of yesterday?

Yes we switched to logged-in users, at least for now.

Sometimes the "Share this chat" button doesn't work, so we can't get the link. Although it says the link was copied to the clipboard, we know it wasn't. I think merging a one-click copy-to-clipboard feature into the same button should do it: even if the pop-up share window to copy the link from doesn't appear, we'd still get the link directly in our clipboard.

Video (12 sec)

@SvCy I think this issue is because of 92 tabs open in background

@KingNish XD I tried different browsers at the time and none worked... but it seems to work just fine nowadays; it only occurs sometimes, I guess, or used to. Regardless, the suggested feature would really be helpful.

Hugging Chat org

Cohere Command R+ is now on HuggingChat!

image.png

@victor This AI is super fine; it boosts efficiency 😎 and Assistant quality 🔥. Thank you for an amazing AI.

I am unable to use the "CohereForAI/c4ai-command-r-plus" language model; it always says "model overloaded" no matter what. Even asking it what "overloaded" means isn't working.

I have never experienced this problem at all before today.

@TheAGames10 The bot gets overloaded when many people are using it at the same time. The bot has to answer all the queries at once, but it has server limitations, so it shows "Model overloaded".
Just wait for some time so the model can work through all the previous queries.

@TheAGames10 The bot gets overloaded when many people are using it at the same time. The bot has to answer all the queries at once, but it has server limitations, so it shows "Model overloaded".
Just wait for some time so the model can work through all the previous queries.

But I would believe that would mean the message would also have shown up quite a bit before today for many others, since this has been out for multiple days, most likely weeks... Yet I haven't seen anyone mention anything about the "model overloaded" error message, meaning no one had gotten it before.

A problem I see with what you are saying: I waited for some time today and tested it, and it worked... before it immediately gave the error on the second test message I sent, asking a single random question.

I just wanted to express my thanks for providing an alternative to the 'censored, community-guidelined, and safety policy hell' that GPT-4 and Gemini Ultra have become. The liberation of being able to simply type words into a conversation without being flagged, conversation deleted, and lectured is such a revelatory breath of fresh air that I sound as though I'm being hyperbolic. I can assure you that if you were to see how many knock-down drag-out arguments of seething rage have ensued between those 'premium' models and myself, perhaps you might better understand. But for now, please accept this informal attempt as a reflection of my deep gratitude.

Sometimes the "Share this chat" button doesn't work, so we can't get the link. Although it says the link was copied to the clipboard, we know it wasn't. I think merging a one-click copy-to-clipboard feature into the same button should do it: even if the pop-up share window to copy the link from doesn't appear, we'd still get the link directly in our clipboard.

I think this is because when you click "copy link" it hands it off to the browser's share dialog.

image.png

So that's the problem; we need to solve this.

@KingNish Yes, this is the "share" portion. But the part after the text saying "copied to the clipboard" is what needs to be implemented as well, for the said reason. CC @nsarrazin
The Comment

Hugging Chat org

🚨 Meta Llama 3 70B is now available on Hugging Chat!

GLdkE2cXoAA5Y_X.jpeg

Let us know what you think about this one!

Let us know what you think about this one!

Checking out the model (I will give a detailed review after testing).
But the image is OP. (Is this AI-generated? Can you specify the AI name?)

Let us know what you think about this one!

Checking out the model (I will give a detailed review after testing)

Detailed review of Llama 3 70B:

Coding: 8/10

Capability: Llama 3 is capable of generating code snippets in various programming languages, including Python, Java, C++, and JavaScript. It can also help with code completion, debugging, and optimization.

Limitation: While it can generate code, it may not always be correct or efficient. It may also struggle with complex algorithms or nuanced programming concepts.

Example: I asked Llama 3 10 complex questions. It generated correct solutions for 9, but some of them were not the best ones.

Creative Writing: 9/10

Capability: Llama 3 is capable of generating creative writing, including stories, poetry, and dialogues. It can understand context, tone, and style, and produce writing that is engaging and coherent.

Limitation: While it can generate creative writing, it may lack the nuance and depth of human-written work. It may also struggle with complex themes or abstract concepts.

Example: I gave it 10 creative story-generation tasks. It generated engaging and well-structured stories, but they lacked the emotional depth and complexity of human-written work.

Multiple Languages: 8.5/10

Capability: Llama 3 is capable of understanding and generating text in multiple languages, including English, Hindi, Chinese, Japanese, Spanish, French, German, Italian, and many others. It can also translate text from one language to another.

Limitation: While it can understand and generate text in multiple languages, it may not always be perfect in terms of grammar, syntax, or idiomatic expressions.

Example: I gave Llama 3 10 paragraphs in different languages to translate. It generated accurate translations, but they lacked the emotion, nuance, and cultural context of a human translation.

General Knowledge: 9/10

Capability: Llama 3 has a vast knowledge base and can answer questions on a wide range of topics, including history, science, technology, literature, and more.

Limitation: While it has a vast knowledge base, it may not always be up-to-date or accurate. It may also struggle with abstract or nuanced concepts.

Example: I asked Llama 3 10 different complex GK questions. It generated accurate and informative responses, but they lacked depth and nuance.

Maths: 6.5/10

Capability: Llama 3 is capable of solving mathematical problems, including algebra, geometry, calculus, and more. It can also help with mathematical concepts and theories.

Limitation: While it can solve mathematical problems, it may not always be able to explain the underlying concepts or find an efficient approach, and it often gives wrong solutions.

Example: I asked Llama 3 to solve 10 complex high-school problems. It generated a correct solution for only 6; in 1 it followed the right method halfway, and the remaining 3 were purely incorrect.

Internet Search: 8/10

Capability: Llama3 can search the internet and provide relevant information on a wide range of topics. It can also help with finding specific information or answering complex questions.

Limitation: While it can search the internet, it may not always be able to evaluate the credibility or accuracy of the sources it finds.

Comparison with other models:

Llama 2
Llama 3 is a significant improvement over LLaMA 2 in terms of its capabilities and performance. It has a more advanced language model, better understanding of context and nuance, and improved generation capabilities. It is also more knowledgeable and accurate in its responses.
.
.
.
(More to be added)
.
.
.
Overall, Meta-Llama-3-70B-Instruct is a powerful and versatile language model that can perform a wide range of tasks and answer complex questions. While it has its limitations, it is a significant improvement over previous language models and has the potential to revolutionize the field of natural language processing.
.....................................................................................................
If you liked the review and want review for more models Give a thumbs up 👍

@TheAGames10 The bot gets overloaded when many people are using it at the same time. The bot has to answer all the queries at once, but it has server limitations, so it shows "Model overloaded".
Just wait for some time so the model can work through all the previous queries.

I am still getting the same error no matter what time of day I try CohereForAI/c4ai-command-r-plus; I have been unable to use it at all. This really needs to be fixed. I no longer believe it is because "many people are using it at the same time", since this error message shows up all day long no matter when I try to use it.

@TheAGames10 Yeah, I have also started getting lots of error messages.

@nsarrazin Please look into this.

I have some solution ideas -> https://huggingface.co/spaces/huggingchat/chat-ui/discussions/410#6622529f34b0d43c541ebf0a

@TheAGames10 Yeah, I have also started getting lots of error messages.

@nsarrazin Please look into this.

I have some solution ideas -> https://huggingface.co/spaces/huggingchat/chat-ui/discussions/410#6622529f34b0d43c541ebf0a

As of today, I have also started seeing the error message on the mistralai/Mixtral-8x7B-Instruct-v0.1 model.

Issue:
Errors are understandable, but losing the sent prompt hurts so much. I don't get why we must lose the sent prompt on any such error. We should be able to retry/regenerate. And there is sometimes another error of "message not found". Please address these. :(

Errors are understandable, but losing the sent prompt hurts so much. I don't get why we must lose the sent prompt on any such error. We should be able to retry/regenerate. And there is sometimes another error of "message not found". Please address these. :(

Screenshot of this error -
image.png

When trying to edit the system prompt, it auto-navigates from hf.co/chat/settings/[model] back to just hf.co/chat/settings (application settings), and we need to try several times to complete the edit.

🚨 Microsoft Phi-3 4k instruct is now available on Hugging Chat!

image.png

Let us know what you think about this one!

Link ->https://huggingface.co/chat/models/microsoft/Phi-3-mini-4k-instruct
..........................................................................
Fun fact: the Hugging Chat team is so busy that they even forgot to officially announce it. So, I announced it 😅.

This comment has been hidden
Hugging Chat org

@KingNish Good review; it would have been better to keep it only in https://huggingface.co/spaces/huggingchat/chat-ui/discussions/372#6628d86ee29b77dd22ff77c4 rather than here too 😔

I might have missed this, is there any way to switch off iOS vibration?

Hugging Chat org

I might have missed this, is there any way to switch off iOS vibration?

not at the moment, what's bothering you about it?

I might have missed this, is there any way to switch off iOS vibration?

not at the moment, what's bothering you about it?

😅 Mostly personal preference: when there's a long reply or a long chat, the phone continuously vibrates.

Hugging Chat org

😅 Mostly personal preference: when there's a long reply or a long chat, the phone continuously vibrates.

Yes, I agree. Maybe a fix would be to vibrate only every N words, or at each sentence start?

😅 Mostly personal preference: when there's a long reply or a long chat, the phone continuously vibrates.

Yes, I agree. Maybe a fix would be to vibrate only every N words, or at each sentence start?

Personally, I don't use the vibration feature at all, so for me a toggle to switch off vibrations would work. However, your proposal might work for general cases where some might like the vibrations.

We will add a toggle to switch off vibration in the next release.

Any updates on guest access?
Also, for me there's no visual explanation on the page of a shared assistant as to why it's not possible to start chatting. I see the "You have to be logged in" error message only in the console.
Also, maybe you could at least allow guests to see the prompt (Settings)?
Thank you.

Model: Llama 3 70B Instruct
There seems to be an overall chat limit based on tokens; if I shorten the big prompt, I can chat some more, up to the point where it always gives this error, as if it hit a limit or something.
image.png

image.png

This comment has been hidden

Issue:
Errors are understandable, but losing the sent prompt hurts so much. I don't get why we must lose the sent prompt on any such error. We should be able to retry/regenerate. And there is sometimes another error of "message not found". Please address these. :(

Losing prompts is an extremely frustrating problem that happens so incredibly often (N=1) due to the constant 502 Bad Gateway errors and many random page reloads!

As a dirty little workaround I've resorted to keeping the browser devtools open so that I can recover the lost prompt from the failed network request body (Meta Llama 3 can tell you how to do this 😉).
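
To automate the trick, here's a hedged sketch (this is not part of HuggingChat; `withBodyLogging` and `sentBodies` are names I made up). Pasting something like this into the console before chatting keeps a copy of every outgoing request body, so a prompt lost to a 502 can be pulled back out of `sentBodies`:

```javascript
// Sketch: wrap a fetch implementation so outgoing request bodies are recorded.
// In a browser you would install it with: window.fetch = withBodyLogging(window.fetch);
const sentBodies = [];

function withBodyLogging(fetchImpl) {
  return (url, options = {}) => {
    // Keep a copy of the body before the request is sent, so it survives failures.
    if (options.body) sentBodies.push({ url: String(url), body: options.body });
    return fetchImpl(url, options);
  };
}
```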

@pAr4P3t0r Sounds like a handy yet necessary annoyance... thanks though, I'll look into it! Or a third-party logger-type extension 💀 XD

@pAr4P3t0r Sounds like a handy yet necessary annoyance... thanks though, I'll look into it! Or a third-party logger-type extension 💀 XD

Currently when sending a new message/prompt (through the "Ask anything" textarea at the bottom of the conversation window) the input is cleared instantly and the submit button is replaced by a spinner for the full duration of the response text rendering animation. That means you can start typing a new message right away, but since you can't send it yet anyway, it might be an idea to just disable the field and hold off on clearing the field until after that response rendering animation completes, or at least until after the POST request resolves successfully. I like simple :)
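
A rough sketch of that suggestion (hypothetical code, not chat-ui's actual implementation; `sendPrompt` and `postFn` are made-up names): disable the textarea while the request is in flight and clear it only on success, so a failed request never loses the prompt:

```javascript
// Sketch: hold the prompt in the textarea until the POST resolves successfully.
async function sendPrompt(textarea, postFn) {
  textarea.disabled = true;            // block edits while the request is in flight
  try {
    const res = await postFn(textarea.value);
    if (res.ok) textarea.value = "";   // clear only after a successful response
    return res;
  } finally {
    textarea.disabled = false;         // re-enable either way; the text survives failures
  }
}
```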

Either way cheers for looking into it!

PS My mention of those 502s and reloads wasn't meant as a complaint by the way, I really appreciate this free and open service and I'd even take it with a hello kitty themed GUI if that's what you decide to give.

It's good in lots of scenarios to be able to start typing early; albeit it disappears suddenly with an error, it comes right back, and we can continue writing and then send it once the response completes.

Got this error saying "This conversation has more than 500 messages. Please start a new one to continue."
image.png
If you are wondering, the followed tree (from the top to bottom, not talking about the various branches) only has ~28 messages. I was hoping there would be a way to delete branches or, as an alternative, just erase everything except the current branch tree.

I've started getting this error message: "You are sending too many messages. Try again later," even though I've only been using it for 30 minutes, and I haven't used it in days. I don't understand why I'm getting it, as it hasn't shown up before, even when I've used it for hours at a time. I use it to write stories, which I think it does really well.

I've started getting this error message: "You are sending too many messages. Try again later," even though I've only been using it for 30 minutes, and I haven't used it in days. I don't understand why I'm getting it, as it hasn't shown up before, even when I've used it for hours at a time. I use it to write stories, which I think it does really well.

You and me both 💀 I use it to help me write stories as well, and its been doing the same for me all of a sudden.

I've started getting this error message: "You are sending too many messages. Try again later," even though I've only been using it for 30 minutes, and I haven't used it in days. I don't understand why I'm getting it, as it hasn't shown up before, even when I've used it for hours at a time. I use it to write stories, which I think it does really well.

You and me both 💀 I use it to help me write stories as well, and its been doing the same for me all of a sudden.

me too!! I miss my stories!!! 😭😭😭

I've started getting this error message: "You are sending too many messages. Try again later," even though I've only been using it for 30 minutes, and I haven't used it in days. I don't understand why I'm getting it, as it hasn't shown up before, even when I've used it for hours at a time. I use it to write stories, which I think it does really well.

Me too! What the hell is going on???? I just wanted to RP...

I've started getting this error message: "You are sending too many messages. Try again later," even though I've only been using it for 30 minutes, and I haven't used it in days. I don't understand why I'm getting it, as it hasn't shown up before, even when I've used it for hours at a time. I use it to write stories, which I think it does really well.

I am facing the same problem. I left for 10 minutes, returned, refreshed the page, typed the desired command once again, and was met with the same message: "You are sending too many messages. Try again later."

I thought the limit was 5 per minute, not 5 per day; please do fix this (I did not send 500 messages [the cap] today).

This comment has been hidden
Hugging Chat org

@MadderHatterMax @j3333 @RAZERRhela @Ottomanny @NightDDk Issue has been resolved! Let me know if you still see errors.

I check in here from time to time, and it's good to see that the service is developing. However, its usefulness is still almost negligible. Searching often doesn't work, errors appear, and the chat stops frequently, etc. At the moment, it's better to use the free Copilot than HuggingChat. However, I'm keeping my fingers crossed.

I was using it earlier today and it was working fine, just a bit slow. However, I just went to load it up and I'm getting a 502 Bad Gateway, but when I googled to see if it was down, every site I looked at said no, it's up.

Since about an hour ago, a [502 Bad Gateway] error has been displayed.

Hugging Chat org

We are back and it should be solid now 🦁

Today when I went to use it, it wasn't reading the information given to it properly. For example, it kept getting my characters' genders wrong and whose point of view I wanted the story written from (I will say it does this sometimes), but then it started giving me a "fetch failed" error whenever I tried to have it write something, and now all I'm getting is blank boxes instead of text.

Hugging chat is currently not working on my network either. There may be something wrong with the server.

Would it be possible to edit the chatbot's messages too? When we edit our own message, it regenerates the response; editing the chatbot's message would help the chatbot understand how to answer the question better - what answer the user wants for the prompt.
Similar to the functionality in this Chatbot --> https://perchance.org/ai-character-chat

@argishure

I would love it if we could edit the bot's messages, because sometimes it will start off good, then halfway through it just goes off track and starts making no sense. Other times it will have some dialogue or a detail that I really liked, and I know that if I re-roll it will be lost, which is so annoying. I've found that the chatbot remembers more information than you would think, meaning that if I recall back to something the bot wrote or I did, the bot will use it again.

For example, two of my characters had a fight very early in the story, and when I told the bot to bring up the dialogue from the fight, it did.

Also, I've found that if you tell the bot "rewrite it, but this time make character A say something different" or "don't have character B hit character A this time", it will actually rewrite everything it just did, changing it just enough to fit what you want added or taken away.

Yeah, it does rewrite it better, but that heavily increases the length of the chat, and regenerating the same content again is time-consuming and wastes computational power. I've seen responses nearly 3,000 words long when I'm generating stories.
One more reason for chat editing is that we can use the chat for training Command-R+ on Cohere's site itself. Many other sites might also have this feature, if not now then later.

+1 for adding the ability to edit the AI's reply. Having an edit-and-continue button for the AI response would be very useful. It's one of my favorite features in LM Studio and I would love to see it brought into HuggingChat :D

[at #540]
Errors:
*Message not found
*Code 504 and "An error occurred": wipes the sent prompt, which is often written over time and meticulously. It's a loss!
*"Something went wrong" Retry, even after it responded completely. Sometimes it's quick to fix, but sometimes it takes time, tree movement, or a refresh to fix and continue. (It was great a while back when we could just ignore the empty/errored response and continue the chat.)
image.png

*In system prompt editing, it goes back to app settings from model settings (where we edit the system prompt) upon editing, so you need to go to model settings again (2/3 times?)

*[FIXED] Delete refreshes/resets the UI (tree memory should fix it, I guess)

Much-needed features:
*Tree memory (to remember the last selected branch)
*Continue button (in case we mistakenly stop generation, not only when it stops on its own)
*Move the delete button to the response/prompt instead of beside the tree, to delete a lone message and step back, since image-upload prompts fixate on the same image and can't be edited/removed from the tree
*Tool: a vision model from a Zero GPU Space

Next:
*Edit response (oobabooga-style JSON edit)
*Change model mid-chat (as others support it), or a special mode for it
*Delete prompt/response mid-chat (same as edit; tokens will be removed)

Test:
*Anonymous chats (JSON?) pushed to models as instruction/data, or custom finetuning

I'm not receiving any error messages, but whenever I attempt to start a new chat, all I see is the "loading..." box that appears before generating content. Regardless of how long I wait, nothing happens; it only seems to generate content in an old chat.

Plus, I'm having to reload the page before it will delete anything properly.

I've been getting an "Error code 500" that persists every time I load up HuggingChat. It's account-specific: every time I log in on either of my accounts, it just comes up with the error code. Anybody know how I can fix this?

Hugging Chat org

@j3333 Could you give me some extra details so I can take a look ?

  • what username are you using on huggingchat
  • what browser/os combination are you using
  • since when do you have the issue approximately ?

  • j3333 and J25
  • I use Firefox; I have tried other browsers, but the issue persists
  • It's been happening for about 18 hours now
Hugging Chat org

@j3333 could you try again? issue should be fixed now


It works, thanks!

The Llama 3.1 405b model has been running slowly on HuggingChat.

I seem to be having a similar issue to the one j3333 was having. Logging out gives me responses, but with my own account it doesn't, both on phone and PC. My account is RAZERRhela, on Opera GX. I don't know how long it has been like this, but I tried about an hour ago and it didn't work.

Started seeing the same three dots that never start a prompt response again whenever I try to use the Cohere model in HuggingChat. It's been like this for the past few days now.

I've been noticing some strange behaviors in Hugging Chat over the past few days.

  1. Irregular Token Repetition: When using the Command R+ model without an assistant, the model sometimes provides incorrect responses that simply repeat fragments of the given tokens in an irregular manner. This has happened about 5 times in the past few days, whereas it had never happened before.

    きのこたけのこ戦争について
    「たけたけたけたけきのきのきのきのきのきのきのきのきのきのきのきのきのきのきのきのきのきのきのきのたけきのきのきのきのきのきのきのたけきのたけきのきのきのきのきのきのきのきのたけたけきのきのきのきのきのたけたけきのきのきのきのたけたけたけたけきのたけたけたけたけたけきのきのきのたけたけたけたけたけきのたけたけたけたけたけたけたけたけきのきのきのきのたけたけたけたけたけたけきのたけたけたけきのきのきのたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけきのたけたけきのたけたけたけたけたけたけたけたけたけたけきのたけたけきのきのきのたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけたけきのきの

  2. Frequent Web Searches: Assistants using the Command R+ model have been conducting web searches frequently, even though the Internet Access setting is set to 'Default'. It seems like the Internet Access setting is being ignored.
    Here is the assistant that has started to Web Search, but the phenomenon also occurs with other assistants I have created: https://hf.co/chat/assistant/66b9c9e22c99927281c28a82
    The Dynamic Prompt setting made no difference.

Update 2024/9/25: It seems that this issue has been resolved since the Command R+ model was updated. Problem solved!

STILL SEEING THE THREE LOADING DOTS ENDLESSLY LOADING FOR EVERY PROMPT I TRY. It happens on every single browser, no matter what. Please fix Cohere ASAP!!!!

I have to say that Command-R-Plus is the least stable model on HuggingChat. It crashes multiple times a week, while Llama-3.1-70B is doing just fine most of the time. Is it because Command-R-Plus is used by far more people and is harder to run?

The chat is very cool congrats ! But a very important feature is missing, @victor : Auto Multi language UI !
Not everyone speaks english :(


Llama-3.1-70B is not worth using, though, as it doesn't follow a prompt at all the way Cohere's model does. I ask Llama-3.1-70B to make a long, full-on story with adventure and plenty of detail and dialogue, but it won't even include the dialogue, and the result is neither long nor detailed. Cohere used to follow all the details within a prompt, including the instruction to make a long, full-on story, before it broke and started giving me nothing but question-mark symbols or three endlessly looping loading dots.

I would not use Llama-3.1-70B again if it were the only choice, personally.

OK, I don't know what this website is about, coding AI or whatever (I don't know the first thing about this site), but I DO know how to make some pretty good prompts for AI so they don't sound like robots all the time!

Check it out!

https://huggingface.co/chat/assistants?user=ANIMDUDE
My profile just has some ideas that I have played around with.

As far as I know, HuggingChat handles prompts the best. It has excellent memory, although I have only tested with the default model, meta-llama/Meta-Llama-3.1-70B-Instruct.

However, I noticed that if you ask it the same question, or regenerate its response to get a different one, it repeats itself exactly the same way. This is very odd; it should generate uniquely, in a "super mario 64 is personalized" kind of way. But now that it's the other way around, it feels just as strange. Could two people (with no model instructions) ask the AI the same question and get the exact same answer?

That is the only problem I have faced.
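A plausible (though unconfirmed) explanation: if the backend regenerates with greedy decoding (temperature 0) or a fixed random seed, identical output is expected by design. A toy sketch of the difference, with made-up logits purely for illustration:

```javascript
// Toy next-token sampler. With temperature 0 (greedy decoding) the
// highest-scoring token always wins, so regeneration repeats exactly.
function sampleToken(logits, temperature) {
  if (temperature === 0) {
    return logits.indexOf(Math.max(...logits)); // deterministic argmax
  }
  // Softmax with temperature, then sample from the distribution.
  const scaled = logits.map((l) => l / temperature);
  const maxL = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  let r = Math.random() * sum;
  for (let i = 0; i < exps.length; i++) {
    r -= exps[i];
    if (r <= 0) return i;
  }
  return exps.length - 1;
}

const logits = [2.0, 1.0, 0.5];
console.log(sampleToken(logits, 0));   // always 0 (greedy)
console.log(sampleToken(logits, 0.8)); // usually 0, sometimes 1 or 2
```

So two people sending the identical prompt to a greedy endpoint really would get the identical answer; any temperature above zero reintroduces variation.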

🐻🐔🐰🦊


🐰🎸🎸🎸🎸🎸🎸
🦊Diddly dum... diddly dum dum dum....

And now no model works at all. Every few times I try to give a new prompt or start a new chat, it gives an "error in input stream" message after loading forever. And this happens with every model on there, Cohere and mistralai included.

@ANIMDUDE

I've had something very similar happen a few times in the last week or so. I give it a new prompt so the story I'm having it write can continue, only instead of generating something from the new prompt, it rewrites a response it already gave me at a different point in the chat. This has happened in both very new/short chats and old/long chats.

At times it also completely ignores all or almost all of the information in a prompt and just does its own thing, and the response it gives then is usually messed up in many different ways. Even when the response does relate to the prompt, it still doesn't make any sense; you feel like you have to reread it five times in a row before you can understand any of it.

@MadderHatterMax lol, could I get an example? It's usually not that confusing for me; it's actually clear and comprehensive. Maybe you're using another model?

Oh yeah, does anyone know how to make assistants visible so everyone can see them?

Is anyone occasionally getting output completely unrelated to the prompt?

There seems to be a bug with the web search where it's enabled by default with no way to turn it off


I have also noticed the Cohere model specifically ignoring a bunch of the things/info I put in a prompt now, no matter what I try. It honestly makes me not want to use it anymore.

I ask it to write a long story, full-on with adventure and everything, while being specific in the prompt, but the model doesn't even make the response long. It now feels like it just stops responding about halfway through, not even long enough to scroll down, if at all.


If it pauses in the middle, there should be a continue response button.
Yeah, AI chatbots are not really good at remembering things; the longer the chat, the more they forget. As I've stated about a million times, I don't know much about AI, but I know that GPTs are essentially just a machine putting words in front of each other in a way that a human would like.

Now, imagine speaking normally, but having to follow a bunch of different instructions.

Yeah, that's basically what the GPT is doing. It's hard for it to remember all that stuff. So that may be the reason.

Do you have the specific bot/assistant you were using, so I could see the system instruction? It might be a bit too long.
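On the forgetting point: chat models only see a fixed-size context window, so frontends typically drop the oldest turns once a conversation exceeds the token budget. A hypothetical sketch of that trimming; the 4-characters-per-token estimate and the budget number are illustrative assumptions, not HuggingChat's actual values:

```javascript
// Very rough token estimate: ~4 characters per token (illustration only).
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Keep the system prompt plus as many of the *most recent* turns as fit
// in the budget; older turns are silently dropped ("forgotten").
function trimHistory(systemPrompt, turns, budget) {
  let used = estimateTokens(systemPrompt);
  const kept = [];
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = estimateTokens(turns[i]);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(turns[i]);
  }
  return [systemPrompt, ...kept];
}

const history = ["turn one ".repeat(50), "turn two", "turn three"];
// The long first turn no longer fits the 40-token budget, so only the
// recent short turns survive alongside the system prompt.
console.log(trimHistory("You are a storyteller.", history, 40));
```

This is why details from early in a long story quietly vanish: on later turns they were literally never sent to the model.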

Hugging Chat org

@chaowenguo no need for 3 messages, I read everything :D

Today, Command R+ was replaced with the 08-2024 version on HuggingChat, and I think this is a great improvement. However, conversations that were using the old version of Command R+ can no longer continue because the original model is no longer available.

Instead of this happening, I would like the ability to switch models or assistants in the middle of a conversation. I understand that the intention is to avoid confusion by not allowing model changes during conversations, but it seems more confusing to invalidate all previous conversations every time the model gets updated.

Additional note: It seems that the issue of not being able to continue conversations has been resolved in #565. Thank you for addressing this! I would be even happier if the ability to switch models was available at any time.

image.png

image.png

image.png

Update:
I found that I could change the model by running the following script in Chrome's Developer Tools console, which solved the problem for the time being.

// Run this in the browser DevTools console while logged in to HuggingChat.
// Replace the conversation ID in the URL with your own.
const xhr = new XMLHttpRequest();
xhr.open("PATCH", "https://huggingface.co/chat/conversation/123456abcdef(Your conversation ID)");
xhr.setRequestHeader("Content-Type", "application/json");
// Log the HTTP status so you can confirm the change took effect.
xhr.onload = () => console.log(xhr.status);
xhr.send('{"model":"CohereForAI/c4ai-command-r-plus-08-2024"}');

I'm sorry, but Cohere's new Command R+ 08-2024 version is not good. It does not seem to listen to anything I put into it.

I ask it for a long story, giving it the detail needed, but it doesn't even produce a story long enough to make you scroll down, if at all. I even try to emphasize things when editing the prompt, like putting requests in caps or in parentheses ('please make it detailed', 'please make sure ___'), but it just ignores them no matter what. I even tried changing the first word from "Start" to "Create", thinking it would give a different type of response. It changed nothing at all; I still got a short story no matter what, even though I asked for a good long story.

What I expect by 'long story': a story long enough that the 'Continue' button appears when generation stops. It never gave a story long enough for that button to appear, no matter what I changed in the prompt.

Aaaand as of today, Cohere's Command R+ 08-2024 version has started doing the same thing as its predecessor: non-stop loading when trying to respond.

Bruh

I guess you could just pause generation.

I would really like to see markdown formatting for the user's messages as well. On the LLM Inference Playground this is already the case, but in the actual HuggingChat interface it is not.
I frequently use markdown formatting for input, as it can make it clearer where, for example, my code starts and ends, where there is context and where there is my instruction, and it allows me to highlight to the model what's important for the given task without screaming in ALL CAPS.
This would be very appreciated.
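To illustrate the kind of input structure meant here (the labels and delimiters below are just one possible convention, not anything HuggingChat prescribes):

```markdown
**Context:** the line below comes from a larger script.

**Code:** `const total = items.reduce((a, b) => a + b, 0);`

**Instruction:** explain *only* why `total` could end up as `NaN`, nothing else.
```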

I'd love to see more data on how much an assistant is used. Right now it says "1-10" or "10+" but what does it mean exactly?

Hugging Chat org

@Turtleback217 that's the number of people who have used your assistant!

Awesome, thanks. Is there a way to see how many times it's been used? If 150 people have clicked an assistant once, versus just 20 people having used it 600 times, that tells a widely different story! :D

image.png

Thank goodness this was added as an option. I was actually quite confused yesterday when I couldn't just paste text larger than a certain amount.

Happy Halloween 🎃

OK, not happy Halloween anymore. Go about your Friday, people.

cc @victor, some rendering issues while using the new Qwen model.
image.png

A temporary chat feature just like ChatGPT's🙏🏻 Sometimes I just want to try some stuff out.

Hugging Chat org

@not-lain could you try again? LaTeX formatting should work better now.

@nsarrazin
Thanks a lot for the fix!

image.png

(I'm not sure if I'm in the right thread)☺️

Hugging Chat org

Thanks @philosopher-from-god , I noticed it too, I'm on it!

Hugging Chat org

Should be fixed @philosopher-from-god

A feature similar to Claude Artifacts or OpenAI Canvas to preview our HTML code directly, a PDF reader, or just a simple markdown reader.

export, import, assistant template!?
