beam search with llama_cpp_python?

#1
by tanmaymane18 - opened

Were you able to use beam search with the llama-cpp-python bindings? The original model card says the model works better with num_beams=4.


That's an interesting question. I suppose you could try the n_beam parameter to see how it works, but I have never tested it myself: https://github.com/abetlen/llama-cpp-python/blob/a7281994d87927e42d8e636295c786057e98d8fe/llama_cpp/llama_cpp.py#L2564


By default, llama-cpp-python doesn't provide a high-level API for beam search; only the functions already exposed by llama.cpp are made available through ctypes bindings. I tried reproducing https://github.com/ggerganov/llama.cpp/blob/master/examples/beam-search/beam-search.cpp in Python but got stuck.

Interesting, I did not know that! Is this something we could raise as a feature request for llama-cpp-python?

Definitely. I was thinking of implementing it myself. There are two ways to approach it:

  1. Implement the beam search algorithm in Python.
  2. Call the C++ llama_beam_search API through ctypes.

So far I have tried the 2nd approach, which fails with GGML_ASSERT: n_tokens <= n_batch. By increasing n_batch it gets past the assert and produces some output, but it's garbage. I know I should be discussing this on the llama-cpp-python/llama.cpp repos, but I'm sharing the insights here for everyone.
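For reference, the 1st approach can be sketched independently of llama.cpp. This is a minimal pure-Python beam search: the `step_log_probs` callback and the toy bigram table are my own hypothetical stand-ins for a real model's next-token distribution, which is where a llama-cpp-python logits call would actually go.

```python
import math

def beam_search(step_log_probs, num_beams=4, max_len=3):
    """Keep the `num_beams` highest-scoring token sequences at each step.

    `step_log_probs(seq)` returns a dict {token: log_prob} for the next
    token given the sequence so far (a real model call would go here).
    """
    beams = [((), 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            # Extend every surviving beam with every candidate next token.
            for tok, lp in step_log_probs(seq).items():
                candidates.append((seq + (tok,), score + lp))
        # Prune back down to the top `num_beams` by total log-probability.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:num_beams]
    return beams

# Toy bigram-style distribution over a 3-token vocabulary (made up for
# illustration only -- not from any real model).
def toy_log_probs(seq):
    last = seq[-1] if seq else "<s>"
    table = {
        "<s>": {"a": 0.6, "b": 0.3, "c": 0.1},
        "a":   {"a": 0.1, "b": 0.7, "c": 0.2},
        "b":   {"a": 0.2, "b": 0.2, "c": 0.6},
        "c":   {"a": 0.4, "b": 0.4, "c": 0.2},
    }
    return {t: math.log(p) for t, p in table[last].items()}

best_seq, best_score = beam_search(toy_log_probs, num_beams=4, max_len=3)[0]
# best_seq is ("a", "b", "c") with probability 0.6 * 0.7 * 0.6 = 0.252
```

The key difference from greedy decoding is the prune step: several partial hypotheses are kept alive, so a token that looks locally worse (here "b" after "a") can still win on total sequence probability.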

This is very interesting! Thanks for sharing your progress here, @tanmaymane18.
Shall we also start talking to the llama-cpp-python team? It seems like an interesting feature to have.
