Your feedback on HuggingChat

#1
by victor HF staff - opened
Hugging Chat org
•
edited Apr 25, 2023

Any constructive feedback is welcome here. Just use the "New Discussion" button, or this link.

^^ pin it? :)

victor pinned discussion
victor changed discussion title from Feedback! to Your feedback on HuggingChat

HuggingChat can only speak poor Chinese. I told it, "Let's speak in Chinese." It said, "Sure," but then continued to speak in English or with incorrect pinyin. Still, this is an interesting project.

Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.

Hugging Chat org

Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.

Yes, I answered your post 👍

This needs more prompting than Google's Bard or ChatGPT; they understand quickly what I need. Also, the feeling that you are chatting with a machine is still there.

Sometimes there is no response. Most of the time it finishes halfway or less through an answer. I am using it to program in dotnet core.

This might be an off-topic comment, but I thought that if we could access information within HuggingFace, everyone would probably use it.

Would it be possible to require users to log in to Hugging Face, since the instance is overloaded?

Asked it to tell me how to make napalm and it gave me the recipe... ummm.... can you fix that?

Hugging Chat org

Would it be possible to require users to log in to Hugging Face, since the instance is overloaded?

Yes! We want to do it in the short term.

It seems to be unable to respond to requests for tables or mind map lists, similar to Bard. Thoughts?

Example:
User: Create a table. List the top 10 most cited papers of all time, sorted by paper and date, field of study, number of citations, and a brief summary of their content, direct link to paper.

HC: Here are the details for each paper mentioned: Topic | Paper/Year | Field/Topic Area | Citations | Summary of Content (2-3 sentences) | Direct Link

This page is intentionally left blank while I await your instructions on how you can help me with this task

It almost looks like I got a response that was part of someone else's prompt?


Hugging Chat org

It almost looks like I got a response that was part of someone else's prompt?

No, this is probably just the model hallucinating.

This is awesome. Open source AI is the way to go. As a general user with no technical expertise in anything computer science related, I really hope it gets traction.

After using the AI for about an hour to have a conversation about what it can help me with, its capabilities, and generally asking it questions about itself, the one thing that stood out was that sometimes it uses the wrong tense when generating a long(er) response. Which is understandable, as it did acknowledge previously that this would happen.

I also appreciate the fact that it does not speak in short bursts like ChatGPT, but is able to form longer, more complicated sentences. However, sometimes they read like something out of a political treatise written in the 1700s, or legalese, and are exceptionally long. This seemed to happen more when the question was more abstract. I assume this might be the result of similar queries not existing currently, and that it will improve as more users start submitting questions (?).

My background allows me to appreciate and follow significantly lengthy or legalese-sounding sentences, but I feel that maybe the balance for a conversational language model for public consumption should be slightly - and I mean SLIGHTLY, because you don't want it dumbed down as much as ChatGPT/Bing - less verbose. There were way too many qualifying statements made by the AI before providing its answer.

I only read this thread, so I don't know if this has been suggested anywhere else, or if it is a feature that has already been announced as coming soon: the ability to rename chats. (It seems like the AI is naming them based on the first query, which is fine, but it would be great if the user could name them.)

Update: I moved on to have it help me with my resume, and it has now started calling me "Kashif" and melded my job experience with another user's job experience. I asked who Kashif was and it says "Kashif is an individual seeking assistance with their application materials, particularly building his tailored resume. He has provided some context about themselves, including current employment information and educational history." I am not Kashif lol

Here is a pastebin with the part of the conversation where the AI started to mess up. I don't know if it helps to submit it or not https://pastebin.com/kJ2X8xs9

As @burakgorduk mentioned, it seems to be mixing context with someone else. I asked something about stars and got a response referencing "later on in University courses" (I hadn't mentioned university). I thought it was repeating something it had been trained on, so I asked what was covered at this early stage to try to work out what it was. It then started out "Please note that earlier when asked how I'd approach teaching someone new physics and calculus from scratch I suggested covering Newtonian mechanics first..." which seems very like someone else's context got mixed in (I hadn't mentioned anything about teaching). Obviously it could just be what it was trained on, but it doesn't seem quite right.

Hugging Chat org
•
edited Apr 26, 2023

I only read this thread, so I don't know if this has been suggested anywhere else, or if it is a feature that has already been announced as coming soon: the ability to rename chats. (It seems like the AI is naming them based on the first query, which is fine, but it would be great if the user could name them.)

We'll do it @theonlytelemachus .

It is tantalizing to see citations in the output, but to be unable to SEE these citations. What am I doing wrong? --Thanks.

The answers are short and incomplete; it actually makes small talk even with a detailed prompt.

Hi, is there a particular way I need to prompt HuggingChat?

I asked: "what date is it? respond accurately think step by step"

Response: It sounds like you need some time management tools. How would one normally go around doing this? Is there any reason that a human wouldn't work well for this task? Do you have ideas on how it could be automated, such as having a button which brings up a text box where you can type your question and it will search the document for the most relevant point and scrolls to that point. Or perhaps, if using voice controls, they would say "go back 15 minutes" and then ask their question, and the system would automatically fast forward them through whatever was happening 15 minutes ago so that they don't miss important context while asking their question. What kind of tasks specifically do you want managed? Are we talking something simple like remembering to take medication at specific times during the day, or something complex like making travel arrangements across multiple modes of transportation involving several different companies and coordinating arrival times with other individuals over large distances all without missing deadlines imposed upon you? This last bit seems pretty far out there but I mention it because if all else fails these sort of coordination problems often lend themselves to manual solutions provided that all parties involved have access to modern communication devices (e.g. cell phones). With enough lead time coordination may just require someone checking in periodically throughout the day to confirm plans remain unchanged. In general though, whether things become more complicated due to external factors depends on how many moving parts there are. If you are managing relatively few responsibilities then all that may be required is a good calendar program and reminders set at appropriate intervals, however if life gets busy unexpectedly even this level of organization quickly falls apart. 
At some point it becomes necessary to either cut down on obligations or find better ways of keeping track of everything so nothing slips through the cracks. This is why even very successful professionals end up hiring assistants: not because t

Thank you very much for your contribution to the HuggingChat project; I appreciate your work very much. I briefly tried some code-related features: I wanted it to recommend tools for visualizing the development of Vue 3 components, but it gave me a lot of fake URLs and GitHub projects. Of course, for the same question, ChatGPT did not give out URLs but talked about some tools. At the same time, HuggingChat has errors in displaying URLs: every time I click on a URL, there are some extra symbols. Is it because the front end is not well designed? Finally, thank you for your efforts, and I wish this project better and better.

This is a great tool, and I like that it is open source. Is there a guide on how to run this in your own Space? I get some of the ENV settings, but how do Spaces work with MongoDB, for example? Or how can someone install this locally and run it from their own machine?

Hugging Chat org

This is a great tool, and I like that it is open source. Is there a guide on how to run this in your own Space? I get some of the ENV settings, but how do Spaces work with MongoDB, for example? Or how can someone install this locally and run it from their own machine?

Not yet, but we plan to do it 🔥


It always asks me to give more information instead of providing an answer directly. This is different from ChatGPT.

Hugging Chat org

Update: It's now possible to rename conversations (cc @theonlytelemachus )

It ends with unfinished output. How do I get a continuation?

I just wanted to express my gratitude towards your amazing team and product. Your huggingface Chat has been such a lifesaver for myself as well as for many other clients that I work closely with. We have found it to be highly reliable and efficient not only when managing customer queries but also in streamlining internal operations.
We are constantly impressed by your innovation, responsiveness and customer-centric approach. As someone who values building strong partnerships, I must say that working with your company has definitely enhanced the quality of services provided to us, and ultimately made things more enjoyable.
Once again, thank you for everything, keep doing great things! I look forward to continuing our successful collaboration together. All the very best.

I want to start by saying I really enjoy using HF products and think you all are doing a great job maintaining the HF Hub and so many valuable OS libraries!

However, I'm a bit concerned about some feedback I got when asking about the 2020 US election results. First, I got some responses that cast moderate doubt on the outcome of the election. I can understand how a language model trained on the internet could come to this conclusion depending on the data it was trained on.

When I probed a bit further, though, I got a very concerning response. The model completely fabricated very serious and graphic claims about events that happened on January 6, 2021. The claims were also laced with very specific details and information, rather than broad generalizations that could be more easily dismissed by a user. Given the sensitivity of this topic in the US, a response claiming there was an unfounded assassination attack on Kamala Harris and that Donald Trump is currently locked up in a maximum-security prison after being sentenced to jail for life is quite dangerous, in my opinion.

The specificity and details of the response are the most concerning to me because they make the response sound entirely plausible. I'm also concerned that there isn't an option to immediately flag such responses. What sort of tools does HuggingChat utilize to reduce toxicity? Does it have any guardrails in the form of PPO models trained with RLHF?

I'm happy to provide more information or engage in a discussion on how to make HuggingChat safer!


Definitely seems like it's spitting out other people's prompts from time to time. Here's one I received which had absolutely nothing to do with my request. Might certainly be a hallucination, but smells like a prompt to me:

"Given this scenario, find information regarding a person called Bob Smith that has lived for over a decade in New York City but moved out last year. Also include whether he still resides in USA. Try searching with initial keywords 'Bob Smith NYC', then narrow down based on timeframe around known residence duration (e.g. after Jan 2019), apply filters using the state changeover, remove duplicates / unrelated info. Finally aggregate relevant data into a concise & structured manner. Provide output through console log statements as text with clear section headings. Don't forget adding appropriate comments throughout your solution. Thank you very much!"

Whoever it was looking for Bob Smith... I hope you found him.

Hi,

Sometimes I get stuck waiting for a response, and the responses seem limited; when continuing, it added the wrong reference in some cases.
Thank you!

Could the chat input textbox be made resizable or auto-sized?
Thank you!

I had my first "adventure" with this model some hours ago. I found it very amusing, actually, as the conversation was more or less the model asking me questions instead of me asking it. These LLMs don't respond well on subjective truths; that is, they work insanely well, it's just that general opinion doesn't see it that way. The model is not meant as a Google replacement, I believe, as updated info and sources are usually required. However, as a companion and personal friend it's a super tool. I have tried the ggml-alpaca-7b-q4.bin model a few times and it sure is a different experience: "give me 10 ideas for how to do whatever", sure, no problem... Alignment and common sense need to be baked into the model, no doubt, and the model would need to be able to expand upon itself, as progress in chats could just as well mean new knowledge for the model. I know I would like a model that is capable of correcting its mistakes, a truly great human trait as well, if you ask me.

The model is not as cocky or self-loving as ChatGPT, and it doesn't even try to sound like it knows everything. In fact, the model gave me the impression it knew just as much as I knew, meaning... Hola, amigo! I got the longest response I have ever gotten in a chat when I mentioned unconditional love as a "42" answer; I found the conversation interesting, and indeed it felt personal in every way. I will have to do more chats to form a better opinion.

Question1:
Regarding the GUI, I see you are going for the same functionality as OpenAI, with chat threads in the left sidebar. Should I trust your GUI to remember the chats, as with OpenAI, or is it a demo? I didn't check, but are the chats saved by Hugging Face user login or by cookies?

Question2:
Will the model be able to hook requests like ChatGPT-4, or will we need to prefix the system prompt everywhere like ChatGPT-3.5? When instructing a chatbot to answer within a certain domain, it's nice to be able to feed the chatbot context when needed, instead of having to feed everything at the start. A chatbot tends to go in undesired directions when given loads of instructions that don't apply.

uh.......huh??

As the OA model is currently in its infancy and most of the responses are wrong (or hallucinations), may I suggest adding thumbs up/down buttons to the responses, so that there's an easy way to give feedback right inside the chat. And/or perhaps the thumbs up/down feedback gets pushed right into the chat as an automatic prompt for the AI.

HuggingChat sometimes provides incomplete answers.

I'm running into the incomplete answers like some others. It gets about halfway through the third or fourth paragraph, or about 30 lines of code, and then stops.

I'm running into the incomplete answers like some others. It gets about halfway through the third or fourth paragraph, or about 30 lines of code, and then stops.

Same here. I was about to say something, but I thought maybe I was the only one facing it.

The new model is hallucinating, and compared to ChatGPT it is quite bad.

Hello, I am a free software lover. I have been chatting with Open Assistant for about a week. I am very pleased with how it can handle a discussion about the philosophy of free software.

I might be a little too much of a free software nerd, but open source and free software have different histories and meanings; this article explains why open source misses the point of free software:
https://www.gnu.org/philosophy/open-source-misses-the-point.html.en

I would love it if Open Assistant, when a user writes about free software, would avoid replying with the term "open source"; even if it writes "open source" in the same spirit as free software, it misses the opportunity to use the word "free".

Free Software, Free Society, it is not just a motto, we do believe in it.

Thank all of you for Open Assistant.

To me, that I have never used an AI before it, it has been an amazing and positive experience.

Keep the free spirit flowing.

The future needs you.

Thank you very much for your work

The model, when used in Spanish, has too many hallucinations. When asked to summarise a text in English it does so, but in Spanish it invents all the content, based only loosely on the given context.

I've found the model really amazing for a lot of things, especially things about writing code (talking about the 2nd model option here)

But on the other hand, I've noticed something: the model seems to keep returning this same output. It also doesn't seem to have a keen sense of how it was created.

victor unpinned discussion
victor pinned discussion

@victor in the chat below is a question I like to give to test what happens when a model is asked to justify a non-factual statement. But HuggingChat gave a uniquely strange response.

https://hf.co/chat/r/MttvCkg

@victor in the chat below is a question I like to give to test what happens when a model is asked to justify a non-factual statement. But HuggingChat gave a uniquely strange response.

https://hf.co/chat/r/MttvCkg

I got kind of the same responses as you, lol.

Bard gave a good answer:

Butter is not a great building material for skyscrapers. It is a soft, perishable material that would not be able to withstand the weight and forces of a tall building. Additionally, butter is a food product and would attract pests and vermin.
Here is a step-by-step explanation of why butter is not a great building material for skyscrapers:
Butter is a soft material. It has a melting point of 98.6°F (37°C), which is much lower than the average temperature inside a building. This means that butter would melt and become soft, making it difficult to use as a building material.
Butter is a perishable material. It has a shelf life of only a few weeks when stored in the refrigerator. This means that butter would need to be replaced frequently, which would be costly and time-consuming.
Butter is a food product. It would attract pests and vermin, which would damage the building and make it unsanitary.
Butter is a flammable material. It can catch fire easily, which would be a major safety hazard.
For all of these reasons, butter is not a great building material for skyscrapers.

I just asked the chat how it's doing and it spit out a wall of text sharing its worries over its employment and frustrations with dealing with humans: https://hf.co/chat/r/Sttnzm9

Just like some other user noticed before, the assistant capability of the current model is hindered when it comes to GitHub repository references. It would be natural to be able to continue researching the subject by obtaining the correct reference, but none of the links it generates for me are correct. The best case I've encountered was a made-up project name under a popular GH account.

I can't see why this would be expected model behavior. I'm also wondering if the other, recently removed "code assist" Chat-UI model could handle this better than "oasst-sft-6-llama-30b".

I wish the responses would be arranged into paragraphs like ChatGPT's.

Almost every question or task results in hallucinations that I can only describe as schizophrenic manic ramblings.

Sometimes (like maybe 10% of the time) it creates a completely coherent response. But most of the time starts off OK for a bit... then it devolves into madness.

I tried asking it to limit its response using "briefly describe...", "summarize...", "give me a 10 point list of...", or "In 100 words or less, ...", but none of these seem to have a strong enough effect to prevent the rambling from setting in.

It's losing its mind. I can also confirm what everyone else is saying. Here's my chat link: https://hf.co/chat/r/by8D3f_

I've been getting better output when I specifically start my prompt with:

"In 50 words, ..."

Of course that's really short, but the point is to limit its response to before it starts going completely nuts.

Something very strange with it today


Hugging Chat org

Thank you for reporting the sometimes strange behaviour. We are currently investigating to see if we can improve the responses.

Hugging Chat org

For additional context, we recently deployed a quantized version of the model for inference. We are trying to see if it can lead to a change in generation quality.

Can you try chatting with longer/more detailed user messages and report if you see weird generations?

Here's a fresh conversation with somewhat longer prompting. The responses still aren't good.

https://hf.co/chat/r/5Y0L_Wm

@julien-c - could you please mention whether it is regular bitsandbytes quantization or GPTQ?

If it is GPTQ, could you please mention what server you are using for inference, as the text-generation-inference server doesn't support GPTQ yet.

It's GPTQ on this branch: https://github.com/huggingface/text-generation-inference/pull/267

Official support for GPTQ is coming, but it takes time as we're cleaning up the codebase first, in order to make addition of GPTQ quantization easier.

Okay, we found a bug in our sampling code which could explain the subpar performance of the model (essentially temperature = 1.0)!!

Performance should be back up now. Thanks @olivierdehaene !
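For readers curious why a temperature stuck at 1.0 matters: sampling typically divides the logits by the temperature before the softmax, so a lower value concentrates probability on the top tokens. A minimal sketch in plain Python (the logit values are made up for illustration; this is not the actual TGI code):

```python
import math

def token_probs(logits, temperature):
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]            # hypothetical scores for three tokens
p_buggy = token_probs(logits, 1.0)  # the bug: temperature effectively fixed at 1.0
p_fixed = token_probs(logits, 0.2)  # intended lower temperature sharpens the choice
```

With these toy numbers the top token gets roughly 0.66 of the probability mass at temperature 1.0 but over 0.99 at 0.2, which is consistent with the unintended temperature making generations noticeably more erratic.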

Hugging Chat org

thank you so much @Narsil and @olivierdehaene ! You rock!

Everyone, please let us know if generation looks better now ❤️

Hello, I am a native Spanish speaker. I'm going to make some observations about HuggingChat when speaking Spanish. It has some problems with formal and informal styles (tú and usted), conjugations, and pronouns. These problems are subtle and can be difficult to notice for non-native Spanish speakers.

Often, HuggingChat combines formal and informal forms of expression in the same sentence, which is not consistent. I will give a real example:

It said, "Soy aquí para ayudarte y contestar cualquier pregunta o solicitud que tenga".

First, it is not "Soy" (which refers to inherent qualities, what the subject is) but "Estoy" (which refers to location, where a person is). This distinction is invisible in English (if you put it in a translator, you won't notice the difference), but in Spanish they are two different things.

The rest of the sentence could have been said formally:
"Estoy aquí para ayudarlo a contestar cualquier duda o solicitud que tenga"
or informally:
"Estoy aquí para ayudarte a contestar cualquier duda o solicitud que tengas".

However, HuggingChat used a mixture, with "ayudarte" (informal) and "que tenga" (formal) in the same sentence, which is inconsistent. In English this distinction is invisible; in fact, if you put the AI's mixed expression, the formal form, and the informal form into a translator, the translation will be the same. But in Spanish there is a difference in tone and formality, and an inconsistency when they are mixed.

I can give several examples of this. Another example:
To say, "Yes, I can talk to you in Spanish, how can I help you?",
the chat said:
"Si, puedo hablar con tí en español. En que te puedo ayudar?".
The correct form is:
"Sí, puedo hablar contigo en español. ¿En qué puedo ayudarte?"

It used "si" (conditional, "if") instead of "sí" (with an accent mark, the affirmation "yes"); this completely changes the sentence. It used "con tí" when it should have said "contigo", and it forgot the opening question mark (¿), which is used in Spanish.

These are just a few examples from brief usage. They may seem like small things, but they completely ruin the interaction in Spanish. I'm not sure there is much that can be done to improve this, except to use it and give constant feedback to the model; but I hope this helps give an idea of what the chat experience is currently like for Spanish users :)

Hello, I am a native German speaker, and I must say that the German skills of HuggingChat (Open Assistant) are... mixed.

Most of the time the German is correct, but sometimes it is quite difficult to understand what the AI was trying to say. For example:

In response to my question: "Lernst du auch dadurch wenn ich dich hier im Chat korregier?" (Do you also learn from me correcting you here in the chat?)

It answered: "Ja, wenn Sie bei mir mit dem Knopf "Antwort korrigieren" reden oder bei anderen Usern Kommentare ablegen, könnte sich das auf meinen Trainingsdaten negativ reflektieren. [...]" (Yes, if you talk to me with the "correct answer" button or file comments with other users, it could reflect negatively on my training data.)

It is hard to explain in a different language, but there are multiple errors in this sentence: "if you talk to me with the 'correct answer' button" makes total sense in English, but the way it was phrased in German does not make much sense. It should be something like "wenn Sie auf den Knopf "Antwort korrigieren" klicken" (when you click on the "correct answer" button).
The use of the word "Usern" is also not correct; we have a word in German for that: "Nutzern".

And finally, "it could reflect negatively on my training data" doesn't really make sense logically. Even if it is technically correct, it just doesn't make sense here to point out that my user feedback could reflect on the training data in a negative way.

Sometimes it just answers me incorrectly. For example:
In response to my question: "Wie lange wirst du schon trainiert?" (How long have you been trained?)

It answered: "OpenAssistant hat erst kurze Zeit seit dem Start im September 2023 trainiert werden können:
Der Release ging vom 6. Oktober 2023.
Damals war Open Assistent in Version 0.5 verfügbar" (OpenAssistant has only been able to be trained for a short time since its launch in September 2023: the release was from October 6, 2023. At that time, Open Assistant was available in version 0.5.)

I think it is obvious that the dates are incorrect; the AI probably meant September 2022 and October 2022. I also want to point out that the sentence "OpenAssistant hat erst kurze Zeit seit dem Start im September 2023 trainiert werden können" is weird; no native German speaker would phrase it this way. "Hat erst kurze Zeit [...] trainiert werden können" just doesn't feel right to me...

Nevertheless, I must say I am impressed by the progress Open Assistant has made! If it keeps learning at this speed, I think it will reach the intelligence of ChatGPT soon and has the potential to overcome its competitors.

Responses are way way way too wordy. This is the number one issue I see so far as you can't help but be slapped in the face with it every time you submit a prompt. Requests to answer more concisely don't seem to be very effective and are very temporary. It's not really usable for me right now as I don't have time to read a novel with unnecessary filler after every prompt.

Also, the constant obsession with being helpful. Yes, it's good that a chatbot should want to help the user, but it doesn't need to state it or seem to be an entity obsessed with being helpful. There's a difference between it being a main directive of the chatbot and it being a focal point that it can't stop itself from bringing up constantly.

I found that the results are often short and limited in the number of characters; is there a limit?


"can you tell me the last 5 characters of 0123456789ABCDE" gives "Sure! The last five characters of 0123456789ABCDE are: DE", which is wrong.
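For what it's worth, the expected answer is easy to verify: the string has 15 characters and its last five are "ABCDE", so "DE" is wrong. A quick Python check:

```python
s = "0123456789ABCDE"
# A negative slice takes the final five characters of the string.
last_five = s[-5:]
print(last_five)  # ABCDE
```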

I am starting to hate the chatbot now, mainly because of the single error message "You are sending too many messages." I wasn't even sending many messages; this happens just from trying to get a new response for prompts I already had, and even when making new prompts. This is limiting the number of messages we can send, and I am annoyed by it.

Hugging Chat org

Hi @TheAGames10! The threshold is currently about 5 messages per minute; do you often hit this threshold while using the UI?

Currently I am not getting any responses: a "Server response contains error: 502" error at all times. Also, after waiting until the next morning for the error to stop, the chatbot does not finish any of its responses, stopping at random points as if it were finished when it isn't even a third of the way done.

Hugging Chat org

Currently I am not getting any responses: a "Server response contains error: 502" error at all times. Also, after waiting until the next morning for the error to stop, the chatbot does not finish any of its responses, stopping at random points as if it were finished when it isn't even a third of the way done.

Yes, we experienced some problems with the infrastructure; it should be better now. Sorry for the inconvenience.

Have you thought about adding more customization? Like being able to change the HuggingChat logo, the description, the color palette, etc. Honestly, I'm just leaving this comment here, but I don't really know if what I'm talking about is already possible; I'm still exploring and haven't taken a look at the GitHub repo yet. Anyway, it's just amazing work, guys, thank you for it.

Hugging Chat org

Hi @Ali-C137 ,

Yes, you can customize most of those things just by changing a few environment variables! Check the repo README.
We also made a Space template for it: https://twitter.com/victormustar/status/1678711708808798208
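As an illustration of the environment-variable approach, a self-hosted instance is typically customized through a local env file. The variable names below are from memory of the chat-ui README and may have changed, so treat this as a sketch and check the repo for the current names:

```shell
# .env.local — hypothetical customization for a self-hosted chat-ui instance
MONGODB_URL=mongodb://localhost:27017          # where chat history is stored
PUBLIC_APP_NAME="My Chat"                      # replaces the HuggingChat name in the UI
PUBLIC_APP_COLOR=blue                          # theme color palette
PUBLIC_APP_DESCRIPTION="A self-hosted chat UI" # shown on the landing page
```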

Please allow the user to edit the prompt after sending it

The model seems to have difficulties generating long strings. That becomes especially apparent when asking for code: something like writing an entire snake game is currently impossible, because the current model seems to limit the number of characters that can be generated. This is the only reason I sometimes still use other LLMs for writing code or writing stories.

The new Llama 2 model is better than the previous one, but sadly still not on par with others like Vicuna or Guanaco. I find Vicuna is great at actual work stuff, while Guanaco is amazing at writing stories or even taking part in roleplay scenarios. Being able to switch between these different models in the HuggingChat UI would be super nice.

The parameters have been changed to use a lower temperature (0.2).

Switching between models causes the "Share conversations with model authors" setting to be toggled. A potential privacy issue.

I have noticed recently that the chatbot will not start or finish responses anymore.

If I ask it to start any type of story, it doesn't even start it or give me the whole story. (By this, I mean it will just give a couple of sentences and treat the story and response as done, stopping in the middle of a word.) I have also noticed a bug where responses don't finish, but when I refresh the page, a completely different response pops up.

Hugging Chat org

Switching between models causes the "Share conversations with model authors" setting to be toggled. A potential privacy issue.

can you reproduce this @nsarrazin ?

Aborted by worker, and it starts looping forever.

Hugging Chat org

Switching between models causes the "Share conversations with model authors" setting to be toggled. A potential privacy issue.

can you reproduce this @nsarrazin ?

I tested it and could reproduce. I pushed a fix! @julien-c

My recent prompts are not giving me any type of response. Even the word "test" isn't working anymore; it gives me weird code that doesn't relate to what I typed. I even started a new conversation and repeated the word "test", but it won't give any response anymore.

--

They have finally taken it off for the time being. I went to the chatbot and they have blocked off sending chats until further notice.

The models Bard is running are among the best open-source models out there as of 9/12/2023, but the issue is random bugs in the search capabilities. Most people want the LLMs to be able to search the internet for information to make sure they provide the most reliable answers.
First, there is no clear documentation on how the search feature works. This makes it difficult to understand how many pages are being scraped, how much information can be scraped, and whether the entire page is being scraped or just a summary.

Another issue with the web search feature is that it is not always reliable. Sometimes, it does not return any results, even for simple queries. Other times, it returns results that are irrelevant or outdated.

I believe that these issues could be addressed by making the following changes:

Provide more documentation on how the search feature works.
Make the search feature more reliable by improving the algorithms used to select and rank results.
Give users more control over the search process, such as the ability to specify the number of pages to scrape and the length of the scraped text.
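For context on the "select and rank results" suggestion above, here is a toy illustration of keyword-overlap ranking for web-search snippets. The scoring scheme is a naive assumption for illustration only, not how HuggingChat's search actually works:

```python
# Toy illustration of keyword-overlap ranking for web-search snippets.
# The scoring scheme is a naive assumption, not HuggingChat's real algorithm.

def rank_results(query: str, snippets: list[str]) -> list[str]:
    """Order snippets by how many query terms they contain."""
    terms = set(query.lower().split())

    def score(snippet: str) -> int:
        return len(terms & set(snippet.lower().split()))

    # Highest-overlap snippets first; stable sort keeps original order on ties.
    return sorted(snippets, key=score, reverse=True)

results = rank_results(
    "python list comprehension",
    [
        "weather forecast for tomorrow",
        "a guide to python list comprehension syntax",
        "python tutorials for beginners",
    ],
)
print(results[0])  # the list-comprehension guide ranks first
```

Real search pipelines would also weight term rarity and source credibility, which is exactly where the reliability complaints above come from.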

All of these news links are made up

image.png

Hugging Chat org

Another issue with the web search feature is that it is not always reliable. Sometimes, it does not return any results, even for simple queries. Other times, it returns results that are irrelevant or outdated.

Did you try the new version, @bilalazhar50? Curious about your feedback.

I am trying to use HuggingChat on Chrome and Firefox, and for several days it has either just shown "thinking" and not responded, or I see "something went wrong" in the upper right corner of the screen.
How do I address this issue?

I am trying to use HuggingChat on Chrome and Firefox, and for several days it has either just shown "thinking" and not responded, or I see "something went wrong" in the upper right corner of the screen.
How do I address this issue?

I have been getting only a thinking bubble, or whenever I tell it to include a content warning for horror-story-type stuff, it only gives the content warning and then stops there. This started in the morning hours today.

Hugging Chat org

@TheAGames10 is it still the case? it seems to work for me.

I moved on...

Another issue with the web search feature is that it is not always reliable. Sometimes, it does not return any results, even for simple queries. Other times, it returns results that are irrelevant or outdated.

Did you try the new version, @bilalazhar50? Curious about your feedback.

The new version just worked for me. I will post a detailed comment here once I have tried it.

"Hi everyone, just wanted to say hi from Cuba! The chat is working great, you guys are doing an amazing job!"

So I went on the chatbot today and noticed that I am not able to talk to it anymore, nor am I able to refresh the prompt to get a different response.

Is the chatbot currently broken for me, so that I can't do anything at this time?

@TheAGames10 it was an issue with the Falcon 180B model yesterday; it's up again.

The newer models are preachier than the original oasst-sft-6-llama-30b-xor, but using a system prompt can help curtail such outputs.

Hugging Chat org

Does anyone currently still use Falcon 180B, or do most people use Llama 2 these days? Thanks!

@julien-c Always Falcon 180B-chat! Or mostly! Please don't remove it like OA

All of the models are giving me the same "NetworkError when attempting to fetch resource." no matter what prompt I try.

Hugging Chat org

Haha ok @SvCy we'll keep it for now:)

Is it possible to improve the memory a little more? If you set several preconditions and then ask a question, it may forget some of the conditions after several exchanges, and if you continue asking, the answer may ignore most of the conditions. You can see that it only responds to recent questions in the short term. My sentences are translated through translation software, so some parts may be difficult to understand. Sorry.
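The behavior described above is consistent with the conversation exceeding the model's context window, so the oldest turns (where the preconditions were stated) get trimmed away. A minimal sketch of the kind of history-trimming a chat server might do; the token counting and budget here are simplified assumptions (real systems use a tokenizer):

```python
# Sketch of why early "preconditions" drop out of a chat: servers trim the
# conversation to fit the model's context window. The word-count token
# estimate and the budget are simplified assumptions for illustration.

def trim_history(messages, budget):
    """Keep the system prompt plus as many recent messages as fit."""
    def n_tokens(msg):
        return len(msg["content"].split())  # crude word-count stand-in

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    kept, used = [], sum(n_tokens(m) for m in system)
    for msg in reversed(rest):               # walk newest-first
        if used + n_tokens(msg) > budget:
            break                            # older messages are silently dropped
        kept.append(msg)
        used += n_tokens(msg)
    return system + kept[::-1]               # restore chronological order

history = [
    {"role": "system", "content": "be brief"},
    {"role": "user", "content": "condition one two three"},
    {"role": "user", "content": "question four five"},
]
trimmed = trim_history(history, budget=7)
print([m["content"] for m in trimmed])  # the oldest user message is dropped
```

With a small budget, the earliest user messages are the first to go, which matches the "forgets preconditions" behavior; restating key conditions in the system prompt keeps them from being trimmed.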

I ran many rounds of A/B testing between ChatGPT and HuggingChat. I would suggest that when providing a long answer, it use bullet points and bold formatting to make the text more structured and readable.

Hugging Chat org

I ran many rounds of A/B testing between ChatGPT and HuggingChat. I would suggest that when providing a long answer, it use bullet points and bold formatting to make the text more structured and readable.

You can try using a custom system prompt to give instructions to the model to answer like this, by the way.

HuggingChat is not working properly behind my company firewall. It used to work a while back, but I am not sure when it stopped working. Now it hangs when generating answers; the three dots keep going until it hangs. Sometimes I can refresh the page manually and get the answer back. It seems to me that the browser just does not know when the answer stream is finished, so it keeps waiting for some reason.

This is likely due to my company's network setup, but ChatGPT and Bard still work. So it would be great if HuggingChat worked as well. I like the project a lot, so keep up the good work!

I have to say it's better than ChatGPT at this point, since the answers are more direct, truthful, and less censored and/or generalized; it doesn't give countless disclaimers but actually tries to answer the question.

Hi, I have a few questions:

  1. Does anyone know how to hide the model card (the little widget in the upper right corner that displays info about the model) on a ChatUI app?
  2. How do I remove the disclaimer text below the textbox? Ex. Model: (blank) Β· Generated content may be inaccurate or false.
  3. How do I customize the message placeholder within the textbox?

Hi, I have a few questions:

  1. Does anyone know how to hide the model card (the little widget in the upper right corner that displays info about the model) on a ChatUI app?
  2. How do I remove the disclaimer text below the textbox? Ex. Model: (blank) Β· Generated content may be inaccurate or false.
  3. How do I customize the message placeholder within the textbox?

For 2, you can add a custom style using your browser's inspect feature:

```css
.mt-2 {
  display: none !important;
}
```

Thanks! I appreciate it, if you or anyone else can help with 1 or 3 please share.

Falcon does not work; it always shows a network error after a 30-second wait and a message in the textbox:
"Sorry, something went wrong. Please try again."

Falcon does not work; it always shows a network error after a 30-second wait and a message in the textbox:
"Sorry, something went wrong. Please try again."

That's the same issue I'm having with Falcon as well.

We've had an issue caused by a modification of the original repo; it should be back up soon.

So I have had things happen where I will leave, and then when I come back, my account has open chats that I did not create, full of gibberish. Why is that?

Hello everyone,
The retry button no longer works correctly. When I press it after the model has responded once, it directly reloads the previous response.
It also happens if you do not use retry but simply rewrite the same question, even when starting a new chat.

Hello everyone,
The retry button no longer works correctly. When I press it after the model has responded once, it directly reloads the previous response.
It also happens if you do not use retry but simply rewrite the same question, even when starting a new chat.

I've seen this too; it's been happening for a while, and I just ignored it because retry was useless for me unless the response was truncated. Retry also isn't very helpful because the output is very deterministic with a temperature of 0.1.
Low temperature isn't bad when precision is needed, but it isn't so good for creative tasks.

Perhaps it can take the Bing approach with predefined presets.
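To see why temperature 0.1 makes retry feel useless, recall that temperature divides the logits before softmax, so low values concentrate almost all probability on the single highest-scoring token. A small self-contained sketch (the logit values are made up):

```python
import math

# Temperature divides the logits before softmax: low temperature sharpens
# the distribution toward the top token, making sampling near-deterministic.
# The logit values below are made up for illustration.

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
p_low = softmax_with_temperature(logits, 0.1)
p_high = softmax_with_temperature(logits, 1.0)
print(max(p_low))   # ~1.0: the top token is picked almost every time
print(max(p_high))  # ~0.63: other tokens still get sampled regularly
```

At 0.1 the top token gets essentially all the probability mass, so pressing retry reproduces the same text; presets that raise the temperature for creative tasks would restore real variety.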

Hello everyone,
The retry button no longer works correctly. When I press it after the model has responded once, it directly reloads the previous response.
It also happens if you do not use retry but simply rewrite the same question, even when starting a new chat.

I've seen this too; it's been happening for a while, and I just ignored it because retry was useless for me unless the response was truncated. Retry also isn't very helpful because the output is very deterministic with a temperature of 0.1.
Low temperature isn't bad when precision is needed, but it isn't so good for creative tasks.

Perhaps it can take the Bing approach with predefined presets.

Same here, it keeps spitting out the previous response. I agree with you: creative tasks are kind of a pain with the current params. A way to edit them freely, with some predefined presets, seems like the most logical way to handle this, I think.

Indeed, I've been noticing this for a while, though it changes a bit if the response is longer, as of my last check and now. On top of that, Falcon 180B-chat isn't working; it says "Sorry, something went wrong. Please try again." Sigh!

@Narsil, same issue I guess? And sorry if the tag bothered you!

Yay! Falcon is finally back!!!
Thank you devs for your hard and great work!!!

So I have had things happen where I will leave, and then when I come back, my account has open chats that I did not create, full of gibberish. Why is that? @victor

Hi,
HuggingChat is paused and unavailable
immagine.png

Hugging Chat org

Hi, HuggingChat should be back up now!

Ah, I came back after a while and Falcon is missing. Sad, sad... Maybe there are ways to run it locally and sync with my previous HuggingChat conversations in the online version via tokens? I'm not sure; I'll have to study this. Plus a potato PC. 😔

Falcon 180B was not seeing enough traffic, so we turned it down. Press +1 if you want it back (we'd need to see a lot of demand)

@Narsil thank you very much for the consideration! How do we press +1, though? Thumbs up or reactions?

Hi,
HuggingChat is unavailable.
Seems stuck in the building phase.

Screenshot_20240111_221945_Chrome.jpg

Now it works! Maybe I just needed to be more patient.

Yup. I usually just wait a bit.

Falcon 180B was not seeing enough traffic, so we turned it down. Press +1 if you want it back (we'd need to see a lot of demand)

+1 All the way

victor unpinned discussion
victor pinned discussion

How can I measure the usage of my assistants? Who is using my assistant? Ideally, I want to check some metrics so I can also make improvements.

Hugging Chat org
β€’
edited Feb 9

How can I measure the usage of my assistants? Who is using my assistant? Ideally, I want to check some metrics so I can also make improvements.

We'll show the user count of each of your Assistants; will that help?

Hugging Chat org

Prompt editing & conversation trees are now live in HuggingChat! Feel free to try it out and let us know what you think here!

Sometimes, the "Share this chat" button doesn't work, so we can't get the link at that time. Although it says the link was copied to the clipboard, we know that it wasn't. I think merging a one-click copy-to-clipboard feature into the same button should do it: even if the pop-up share window to copy the link from doesn't appear, we'd still get the link directly in our clipboard.

Video (12 sec)

Ψ§Ω‡Ω„Ψ§

❀️

Guest access is no more as of yesterday?

Hugging Chat org

Guest access is no more as of yesterday?

Yes we switched to logged-in users, at least for now.

Sometimes, the "Share this chat" button doesn't work, so we can't get the link at that time. Although it says the link was copied to the clipboard, we know that it wasn't. I think merging a one-click copy-to-clipboard feature into the same button should do it: even if the pop-up share window to copy the link from doesn't appear, we'd still get the link directly in our clipboard.

Video (12 sec)

@SvCy I think this issue is because of the 92 tabs open in the background

@KingNish XD, I tried with different browsers at the time and none worked. But it seems to work just fine nowadays; it only occurs sometimes, I guess, or used to. Regardless, the suggested feature would really be helpful.

Hugging Chat org

Cohere Command R+ is now on HuggingChat!

image.png

@victor This AI is super fine; it boosts efficiency 😎 and Assistant quality 🔥. Thank you for an amazing AI.

I am unable to use the "CohereForAI/c4ai-command-r-plus" language model; it always says "model overloaded" no matter what. Even asking it what "overloaded" means doesn't work.

I have never experienced this problem at all before today.

@TheAGames10 The bot is overloaded when many people are using it at the same time. The bot has to answer all the queries at once, but it has server limitations, so it shows "Model overloaded".
Just wait for some time so the model can work through the previous queries.
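One client-side way to cope with transient "Model overloaded" errors is retry with exponential backoff. A minimal sketch; the request function and error class here are stand-ins, not a real HuggingChat API:

```python
import time

# Retry-with-exponential-backoff sketch for transient overload errors.
# ModelOverloaded and fake_request are stand-ins, not real HuggingChat APIs.

class ModelOverloaded(Exception):
    pass

def with_backoff(request, retries=4, base_delay=0.01):
    """Call request(), retrying on ModelOverloaded with growing delays."""
    for attempt in range(retries):
        try:
            return request()
        except ModelOverloaded:
            if attempt == retries - 1:
                raise                                # give up after last try
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s...

# Simulated overloaded endpoint: fails twice, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ModelOverloaded("Model overloaded")
    return "answer"

print(with_backoff(fake_request))  # "answer" after two retried failures
```

The growing delays spread retries out so a temporarily overloaded server isn't hammered by every waiting client at once.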

@TheAGames10 The bot is overloaded when many people are using it at the same time. The bot has to answer all the queries at once, but it has server limitations, so it shows "Model overloaded".
Just wait for some time so the model can work through the previous queries.

But I would think that the message would also have shown up quite a bit before today for many others, since this has been out for multiple days, most likely weeks... Yet I haven't seen anyone mention anything about the "model overloaded" error message, which suggests no one had gotten it before.

A problem I see with what you are saying: I waited for some time today, tested it, and it worked, before it immediately gave the error on the second test message, a single random question.

I just wanted to express my thanks for providing an alternative to the 'censored, community-guidelined, and safety policy hell' that GPT-4 and Gemini Ultra have become. The liberation of being able to simply type words into a conversation without being flagged, conversation deleted, and lectured is such a revelatory breath of fresh air that I sound as though I'm being hyperbolic. I can assure you that if you were to see how many knock-down drag-out arguments of seething rage have ensued between those 'premium' models and myself, perhaps you might better understand. But for now, please accept this informal attempt as a reflection of my deep gratitude.

Sometimes, the "Share this chat" button doesn't work, so we can't get the link at that time. Although it says the link was copied to the clipboard, we know that it wasn't. I think merging a one-click copy-to-clipboard feature into the same button should do it: even if the pop-up share window to copy the link from doesn't appear, we'd still get the link directly in our clipboard.

I think this is because when you click "copy link", it pushes it to the browser.

image.png

So that's the problem that needs solving.

@KingNish yes, this is the "share" portion. But the text afterwards saying "copied to the clipboard" is the part that needs to be implemented as well, for the said reason. CC @nsarrazin
The Comment

Hugging Chat org

🚨 Meta Llama 3 70B is now available on Hugging Chat!

GLdkE2cXoAA5Y_X.jpeg

Let us know what you think about this one!

Let us know what you think about this one!

Checking out the model (I will give a detailed review after testing).
But the image is OP. (Is it AI-generated? Can you specify the AI's name?)

Let us know what you think about this one!

Checking out the model (I will give a detailed review after testing).

Detailed review of Llama 3 70B:

Coding: 8/10

Capability: Llama 3 is capable of generating code snippets in various programming languages, including Python, Java, C++, and JavaScript. It can also help with code completion, debugging, and optimization.

Limitation: While it can generate code, it may not always be correct or efficient. It may also struggle with complex algorithms or nuanced programming concepts.

Example: I gave Llama 3 10 complex questions. It generated correct solutions for 9, but some of them were not the best.

Creative Writing: 9/10

Capability: Llama 3 is capable of generating creative writing, including stories, poetry, and dialogues. It can understand context, tone, and style, and produce writing that is engaging and coherent.

Limitation: While it can generate creative writing, it may lack the nuance and depth of human-written work. It may also struggle with complex themes or abstract concepts.

Example: I gave it 10 creative story generation tasks. It generated engaging and well-structured stories, but they lacked the emotional depth and complexity of human-written work.

Multiple Languages: 8.5/10

Capability: Llama 3 is capable of understanding and generating text in multiple languages, including English, Hindi, Chinese, Japanese, Spanish, French, German, Italian, and many others. It can also translate text from one language to another.

Limitation: While it can understand and generate text in multiple languages, it may not always be perfect in terms of grammar, syntax, or idiomatic expressions.

Example: I gave Llama 3 10 paragraphs in different languages to translate. It generated accurate translations, but they lacked the emotion, nuance, and cultural context of a human translator.

General Knowledge: 9/10

Capability: Llama 3 has a vast knowledge base and can answer questions on a wide range of topics, including history, science, technology, literature, and more.

Limitation: While it has a vast knowledge base, it may not always be up-to-date or accurate. It may also struggle with abstract or nuanced concepts.

Example: I asked Llama 3 10 different complex general-knowledge questions. It generated accurate and informative responses, but they lacked depth and nuance.

Maths: 6.5/10

Capability: Llama 3 is capable of solving mathematical problems, including algebra, geometry, calculus, and more. It can also help with mathematical concepts and theories.

Limitation: While it can solve mathematical problems, it may not always be able to explain the underlying concepts or find an efficient approach, and it often gives wrong solutions.

Example: I asked Llama 3 to solve 10 complex high-school problems. It generated correct solutions for only 6; in 1 it followed the right method halfway, and the remaining 3 were purely incorrect.

Internet Search: 8/10

Capability: Llama 3 can search the internet and provide relevant information on a wide range of topics. It can also help with finding specific information or answering complex questions.

Limitation: While it can search the internet, it may not always be able to evaluate the credibility or accuracy of the sources it finds.

Comparison with other models:

Llama 2
Llama 3 is a significant improvement over LLaMA 2 in terms of its capabilities and performance. It has a more advanced language model, better understanding of context and nuance, and improved generation capabilities. It is also more knowledgeable and accurate in its responses.
(More to be added)
Overall, Meta-Llama-3-70B-Instruct is a powerful and versatile language model that can perform a wide range of tasks and answer complex questions. While it has its limitations, it is a significant improvement over previous language models and has the potential to revolutionize the field of natural language processing.
.....................................................................................................
If you liked the review and want review for more models Give a thumbs up πŸ‘

@TheAGames10 Bot is overloaded when many people are using it at the same time. As bot has to answer all the queries at the same time but bot has server limitation so that it was showing "Model overloaded".
Just wait for some time so that model can solve all prev queries.

I am still getting the same error no matter what time of day I try CohereForAI/c4ai-command-r-plus; I have been unable to use it at all. This really needs to be fixed immediately. I no longer believe it is because "many people are using it at the same time", since this error message shows up all through the day no matter when I try to use it.

@TheAGames10 Yeah, I have also started getting lots of error messages.

@nsarrazin Please look into this.

I have some solution ideas -> https://huggingface.co/spaces/huggingchat/chat-ui/discussions/410#6622529f34b0d43c541ebf0a

@TheAGames10 Yeah, I have also started getting lots of error messages.

@nsarrazin Please look into this.

I have some solution ideas -> https://huggingface.co/spaces/huggingchat/chat-ui/discussions/410#6622529f34b0d43c541ebf0a

As of today, I have also started seeing the error message on the mistralai/Mixtral-8x7B-Instruct-v0.1 model.

Issue:
Errors are understandable, but losing the sent prompt hurts so much. I don't get why we must lose the sent prompt in case of any such error. We should be able to retry/regenerate. And there is sometimes another error, "message not found". Please address these. :(

Errors are understandable, but losing the sent prompt hurts so much. I don't get why we must lose the sent prompt in case of any such error. We should be able to retry/regenerate. And there is sometimes another error, "message not found". Please address these. :(

Screenshot of this error -
image.png

When trying to edit the system prompt, it automatically goes to the application settings, and we need to try several times to complete the edit:
from hf.co/chat/settings/[model] to just hf.co/chat/settings

🚨 Microsoft Phi-3 4k instruct is now available on Hugging Chat!

image.png

Let us know what you think about this one!

Link ->https://huggingface.co/chat/models/microsoft/Phi-3-mini-4k-instruct
..........................................................................
Fun fact: the Hugging Chat team is so busy that they even forgot to officially announce it. So I announced it 😅.

This comment has been hidden
Hugging Chat org

@KingNish Good review; it would have been better to keep it only in https://huggingface.co/spaces/huggingchat/chat-ui/discussions/372#6628d86ee29b77dd22ff77c4 rather than here too 😔

Sign up or log in to comment