
Kacper GΕ‚ombiewski

Clausss

AI & ML interests

None yet

Organizations

None yet

Clausss's activity

reacted to csabakecskemeti's post with πŸ‘ 11 days ago
upvoted an article 15 days ago
replied to nroggendorff's post 23 days ago

So what happens if storage goes beyond the limit?

liked a Space 27 days ago
reacted to mikelabs's post with πŸ”₯ about 1 month ago
New activity in Qwen/Qwen2.5-Coder-Artifacts about 1 month ago

Upload 81 files
#8 opened about 1 month ago by Jorge-Ali
reacted to fdaudens's post with ❀️ about 1 month ago
πŸ¦‹ Hug the butterfly! You can now add your Bluesky handle to your Hugging Face profile! ✨
New activity in Qwen/Qwen2.5-Coder-Artifacts about 1 month ago

Update app.py
#7 opened about 1 month ago by yesirecarolinasoto
reacted to etemiz's post with πŸ”₯ about 1 month ago
If I host in HF Spaces, can I interact with the app using an API?
  • 1 reply
reacted to m-ric's post with πŸ‘€ about 1 month ago
A non-Instruct LLM assistant is mostly useless. 🧐

Since it's mostly a model trained to complete text, when you ask it a question like "What to do during a stopover in Paris?", it can just go on and on adding more details to your question instead of answering it. That is a valid way to complete text from its training corpus, but not a way to answer a question.

➑️ So the post-training stage includes an important instruction-tuning step where you teach your model how to be useful: answer questions, be concise, be polite... RLHF is a well-known technique for this.

For people interested in understanding how this step works, the folks at Adaptive ML have put together a great guide!

Read it here πŸ‘‰ https://www.adaptive-ml.com/post/from-zero-to-ppo
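
To make the contrast concrete, here is a minimal sketch (not part of the quoted post) that runs the same question through a base checkpoint and its instruction-tuned sibling. The Qwen2.5-0.5B model names are only an illustrative assumption; any base/Instruct pair behaves similarly, and a recent version of transformers is assumed for chat-style pipeline input.

```python
# Minimal sketch (assumption: Qwen2.5-0.5B as an example base/Instruct pair).
from transformers import pipeline

prompt = "What to do during a stopover in Paris?"

# Base model: trained only to complete text, so it often keeps elaborating
# the question instead of answering it.
base = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B")
print(base(prompt, max_new_tokens=60)[0]["generated_text"])

# Instruct model: post-trained (instruction tuning, RLHF) to answer questions,
# so the generated messages include an actual assistant reply.
instruct = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
messages = [{"role": "user", "content": prompt}]
print(instruct(messages, max_new_tokens=60)[0]["generated_text"])
```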
reacted to nroggendorff's post with πŸ‘€ about 2 months ago
I still think whitespace in tokenizers is so dumb.
Congrats, you just doubled your vocab size for no reason.
  • 3 replies
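
For anyone wondering what the complaint refers to, here is a minimal sketch (not from the post, using GPT-2's tokenizer purely as an example) showing that a word and its leading-space variant get distinct entries in a BPE vocabulary, which is why whitespace handling roughly doubles the number of word-like tokens.

```python
# Minimal sketch (assumption: GPT-2's BPE tokenizer as the example).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# "the" and " the" encode to different single tokens: one plain,
# one with the leading-space marker "Δ ".
ids = tok.encode("the") + tok.encode(" the")
print(ids)
print(tok.convert_ids_to_tokens(ids))  # e.g. ['the', 'Δ the']
```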