Bad response

#11
by AliMc2021 - opened

Hello, I am a developer.
I am building an AI website in Python that includes a chatbot.
I have downloaded and tried all three DialoGPT models: small, medium, and large.
However, all three give poor responses.
I am running them with do_sample=False and seed 42.
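Roughly, this is how I generate a reply (a simplified sketch, not my actual site code; the prompt and max_length are placeholders):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(42)  # fixed seed; with do_sample=False (greedy decoding) it does not change the output

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user turn plus the end-of-sequence token, as in the model card example
user_input = "Hello, how are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=200,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the bot's reply)
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)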
Please help me as soon as you can.
Thank you!

AliMc2021 changed discussion status to closed
AliMc2021 changed discussion status to open

[attached screenshot: image.png]
This is a response I got from this model. lol?

I use many small models, though I have not used this one. These are the ones in my scripts:

declare -A MODELS=(
    ["/home/data1/protected/Programming/LLM/QwQ-LCoT-3B-Instruct.Q4_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Mistral/quantized/Ministral-3b-instruct-Q4_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Mistral/quantized/Mistral-7B-v0.3-Q4_K_M.gguf"]=20
    ["/home/data1/protected/Programming/LLM/Microsoft/quantized/Phi-3.5-mini-instruct-Q3_K_M.gguf"]=30
    ["/home/data1/protected/Programming/LLM/Microsoft/quantized/Phi-3.5-mini-instruct-Q3_K_S.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Qwen/quantized/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Dolphin/quantized/Dolphin3.0-Qwen2.5-1.5B-Q5_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Dolphin/quantized/Dolphin3.0-Qwen2.5-3B-Q5_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/AllenAI/quantized/olmo-2-1124-7B-instruct-Q2_K.gguf"]=24
    ["/home/data1/protected/Programming/LLM/AllenAI/quantized/OLMo-2-1124-7B-Instruct-Q3_K_M.gguf"]=18
    ["/home/data1/protected/Programming/LLM/AllenAI/quantized/olmo-2-1124-7B-instruct-Q3_K_S.gguf"]=21
    ["/home/data1/protected/Programming/LLM/SmolLM/quantized/SmolLM-1.7B-Instruct-Q5_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/DeepSeek/quantized/DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Qwen/quantized/DeepSeek-R1-Distill-Qwen-7B-Q3_K_S.gguf"]=999
    ["/home/data1/protected/Programming/LLM/Qwen/quantized/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M.gguf"]=25
)
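
Since you are working in Python, here is a rough sketch of loading one of these GGUF files with llama-cpp-python (hypothetical prompt and settings, not the exact values from my scripts):

from llama_cpp import Llama

# Load a local GGUF model; n_gpu_layers controls how much is offloaded to the GPU
llm = Llama(
    model_path="/home/data1/protected/Programming/LLM/Qwen/quantized/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf",
    n_ctx=2048,       # context window; keep it modest on a 4 GB card
    n_gpu_layers=-1,  # -1 tries to offload all layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])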

Among all the models I run on my poor GTX 1050 Ti with 4 GB of VRAM, I have never gotten such ridiculous answers.

I have seen repetition and factually wrong information, but never answers that were logically broken like this.

My only recommendation is to switch to a better model; I would suggest the OLMo models from Allen AI first, plus the others in the list above.
