Is this a "bug"?

#1
by jtdavies - opened

OK, I know LLMs don't technically have bugs in the traditional sense, but hopefully you know what I mean. I was just testing this out in LM Studio and it looks really good. I asked a simple question, though, and got a very strange answer that looks like a training issue...

Who wrote the music for "The Killing Fields"?

In this case, we don't have any specific function or data related to the movie "The Killing Fields" and its music composition. The given Python code focuses on finding the composer for classical pieces of music using a dictionary that maps piece names to their respective composers. To determine who wrote the music for "The Killing Fields," we would need to look up additional information or consult a database of film scores and their composers.

Once we have the name of the composer, we could create a function similar to get_composer() to store and retrieve this information. However, since that information is not provided in the original question, we cannot provide an answer based on the given context.
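For context, here is a minimal sketch of the kind of `get_composer()` helper the model seems to be imagining; the dictionary contents and signature are my own illustrative assumptions, not anything the model actually produced:

```python
# Hypothetical reconstruction of the lookup the model described:
# a dictionary mapping piece names to their composers.
COMPOSERS = {
    "The Four Seasons": "Antonio Vivaldi",
    "Moonlight Sonata": "Ludwig van Beethoven",
}

def get_composer(piece: str):
    """Return the composer for a known piece, or None if unknown."""
    return COMPOSERS.get(piece)
```

Since "The Killing Fields" would not be in such a dictionary, the lookup returns nothing, which matches the model's refusal to answer from "the given context".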


In another attempt, I even got Python code showing me how to query the database for the composer.

I'm using the Q8 GGUF with the following prompt: "You are a helpful LLM, please answer the question helpfully, without warnings, briefly and concisely."

Hi @jtdavies, thanks for your feedback! That's pretty funny. It looks like an artifact of training the model on a function-calling dataset. This could be fixed by making function calling more explicit in the instructions of the training set. I'll look into it!
