
Fatih C. Akyon

fcakyon

AI & ML interests

multi-modal learning, video understanding

Organizations

Deprem Yapay Zeka, Yapay ZekĆ¢ Araştırma Ä°nisiyatifi, Radiology-ai, OBSS, fixit, Gradio-Blocks-Party, ultralytics+, Video Transformers, Viddexa AI, Ultralytics

fcakyon's activity

New activity in An-619/FastSAM 18 days ago
New activity in An-619/FastSAM 20 days ago
New activity in fcakyon/timesformer-base-finetuned-k400 about 2 months ago
New activity in fcakyon/timesformer-large-finetuned-k400 about 2 months ago
New activity in microsoft/Florence-2-base about 2 months ago

confidence score

#24 opened about 2 months ago by fcakyon
New activity in openbmb/MiniCPM-Llama3-V-2_5 about 2 months ago

List of All Supported Languages

#76 opened about 2 months ago by fcakyon
New activity in WalidBouss/LeGrad about 2 months ago

šŸš© Report: Not working

#1 opened about 2 months ago by fcakyon
liked a Space about 2 months ago
reacted to joaogante's post with šŸ¤— about 2 months ago
New sampling strategy dropped in šŸ¤— transformers -- Min P sampling šŸ”„

Are you tired of top_k arbitrarily discarding high-quality continuations? Or of top_p forgetting to exclude low-probability tokens and derailing your generation? Try out the new min_p flag in generate, fresh from a PR merged today! šŸ„¬

Min P is a dynamic token filter -- as opposed to Top K, which keeps the K most likely tokens, and Top P, which keeps the most likely tokens up to a fixed cumulative probability, both of which are static filters. Min P takes a base probability (defined in the min_p flag) and multiplies it by the probability of the most likely token in the next-token distribution. All tokens less likely than the resulting value are filtered out. What happens with this strategy?
šŸ‘‰ High-probability token present -> aggressive filter (we don't want to miss that high-probability continuation and risk derailing generation)
šŸ‘‰ No high-probability token present -> relaxed filter (there are many continuation possibilities that the model finds plausible); see the sketch below
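
A minimal sketch of the filtering rule itself (illustrative only; the function name and tensors are mine, not the transformers implementation):

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.05) -> torch.Tensor:
    # Illustrative sketch of the Min P rule, not the transformers implementation.
    probs = torch.softmax(logits, dim=-1)
    # Scale the base probability by the probability of the most likely token.
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    # Drop every token whose probability falls below the dynamic threshold.
    return logits.masked_fill(probs < threshold, float("-inf"))
```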

You should set min_p to a low value, between 0.05 and 0.1. It behaves particularly well for creative text generation when paired up with temperature > 1.
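
As a rough usage sketch, assuming a transformers version that already includes the merged min_p PR (the model and prompt below are just placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM works.
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,    # sampling must be enabled for min_p to take effect
    min_p=0.05,        # base probability, scaled by the top token's probability
    temperature=1.3,   # pairs well with temperature > 1 for creative text
    max_new_tokens=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```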

Kudos to @kalomaze and @menhguin for creating this technique šŸ”„ Read their discussion in the original issue for benchmarks (https://github.com/huggingface/transformers/issues/27670)

Copy-pasteable version of the example in the image below here: https://pastebin.com/VqXNtuxd

Have fun experimenting! šŸ˜Ž