Joao Gante (joaogante)
Posts

Post
New sampling strategy dropped in šŸ¤— transformers -- Min P sampling šŸ”„

Are you tired of top_k arbitrarily discarding high-quality continuations? Or of top_p forgetting to exclude low-probability tokens and derailing your generation? Try out the new min_p flag in generate, fresh from a PR merged today! šŸ„¬

Min P is a dynamic token filter -- as opposed to Top K, which keeps the K most likely tokens, and Top P, which keeps the most likely tokens up to a fixed cumulative probability, both of which are static filters. Min P takes a base probability (set by the min_p flag) and multiplies it by the probability of the most likely token in the next-token distribution. All tokens less likely than the resulting value are filtered out (see the sketch after the list below). What happens with this strategy?
šŸ‘‰ A high-probability token is present -> aggressive filter (we don't want to miss that high-probability continuation and risk derailing generation)
šŸ‘‰ No high-probability token is present -> relaxed filter (there are many continuations that the model finds plausible)
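
A minimal sketch of that filtering logic, purely illustrative (not the actual transformers implementation; the function name and shapes are mine):

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float) -> torch.Tensor:
    # Illustrative Min P filter over a [batch, vocab] logits tensor.
    probs = torch.softmax(logits, dim=-1)
    # Dynamic threshold: base probability scaled by the top token's probability.
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    # Mask out every token less likely than the threshold.
    return logits.masked_fill(probs < threshold, float("-inf"))
```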

You should set min_p to a low value, between 0.05 and 0.1. It behaves particularly well for creative text generation when paired with temperature > 1.
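
Here is a minimal usage sketch, assuming a transformers version that already includes the min_p flag (the model name and prompt are placeholders of mine; the author's own copy-pasteable example is linked below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-favourite-model"  # placeholder -- any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Tell me a story about a lighthouse keeper.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,    # min_p only applies when sampling
    min_p=0.08,        # a low value, between 0.05 and 0.1
    temperature=1.5,   # pairs well with temperature > 1
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```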

Kudos to @kalomaze and @menhguin for creating this technique šŸ”„ Read their discussion in the original issue for benchmarks (https://github.com/huggingface/transformers/issues/27670)

A copy-pasteable version of the example from the original post's image is here: https://pastebin.com/VqXNtuxd

Have fun experimenting! šŸ˜Ž
Post
Adding a long prompt can help you fight LLM hallucinations. However, if you know exactly how you want your LLM output constrained, there are much better strategies! šŸ’Ŗ

Did you know you can force your LLM to ALWAYS generate a valid JSON file? Or to follow a well-defined answer template? You can do that and more with the šŸ¤— transformers-compatible outlines library.
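
For instance, here is a rough sketch of JSON-constrained generation with outlines. The exact API has shifted across outlines versions, and the model name and schema below are placeholders of mine, not from the original post:

```python
from pydantic import BaseModel
import outlines

class Answer(BaseModel):
    title: str
    rating: int

# Placeholder model -- any transformers-compatible causal LM should work.
model = outlines.models.transformers("mistralai/Mistral-7B-Instruct-v0.2")

# Build a generator constrained to always emit JSON matching the Answer schema.
generator = outlines.generate.json(model, Answer)

answer = generator("Review the movie 'Inception' as JSON.")
print(answer)  # a validated Answer instance
```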

Constrained generation doesn't just let you master your LLM -- it also makes your text generation application faster! šŸ”„ The more constrained your text generation is, the bigger the speedups you'll see!

Follow @remi and other outlines folks to stay on top of the constrained generation game šŸ§