
Ritvik Gaur PRO

ritvik77

AI & ML interests

Trying new things


Organizations

None yet

ritvik77's activity

reacted to their post with 👍🤯❤️🤗🚀 2 days ago
ritvik77/ContributionChartHuggingFace
It's Ready!

replied to aiqtech's post 3 days ago

Hey, there's one problem I was facing: instead of having permission for restricted Spaces and models, I was bypassing them, as you can see in the code. We can figure out together whether HF can provide an API endpoint specifically for this issue.

replied to their post 4 days ago

Thanks for the info, I'll be working on it.

posted an update 4 days ago
ritvik77/ContributionChartHuggingFace
It's Ready!

One feature Hugging Face could really benefit from is a contribution heatmap: a visual dashboard that tracks user engagement and contributions across models, datasets, and Spaces over the year, similar to GitHub’s contribution graph. Guess what, Clem Delangue mentioned the idea of using the HF API for it, and we built it.

If you are a Hugging Face user, add this Space to your collection and it will give you stats about your contributions and commits, much like GitHub. It's still a prototype, and I'm still working on it as a product feature.
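
The Space's own code isn't shown in this post, but as a rough sketch of the underlying idea, here is how you could count a user's commits per day with the huggingface_hub client (the username is a placeholder, and this only covers commits to repos the user owns):

```python
# Minimal sketch of the heatmap idea (not the Space's actual implementation):
# count commits per day across a user's models, datasets, and Spaces.
from collections import Counter
from itertools import chain

from huggingface_hub import HfApi

api = HfApi()  # pass token="hf_..." if you need access to private repos
user = "your-username"  # placeholder

# All repos owned by the user, tagged with their repo type.
repos = chain(
    ((m.id, "model") for m in api.list_models(author=user)),
    ((d.id, "dataset") for d in api.list_datasets(author=user)),
    ((s.id, "space") for s in api.list_spaces(author=user)),
)

daily_commits = Counter()
for repo_id, repo_type in repos:
    for commit in api.list_repo_commits(repo_id, repo_type=repo_type):
        daily_commits[commit.created_at.date()] += 1  # bucket by calendar day

# daily_commits maps date -> commit count, ready to render as a heatmap grid.
for day, count in sorted(daily_commits.items()):
    print(day, count)
```

From there, drawing the GitHub-style grid is just a matter of laying the counts out by week and day of the week.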
reacted to burtenshaw's post with ❤️ 11 days ago
The Hugging Face Agents Course now includes three major agent frameworks!

🔗 https://huggingface.co/agents-course

This includes LlamaIndex, LangChain, and our very own smolagents. We've worked to integrate the three frameworks in distinctive ways so that learners can reflect on when and where to use each.

This also means that you can follow the course if you're already familiar with one of these frameworks, and soak up some of the fundamental knowledge in earlier units.

Hopefully, this makes the agents course open to as many people as possible.
posted an update 13 days ago
Does anyone remember Wile E. Coyote from the Looney Tunes show? He did it again, but this time by fooling a Tesla! This shows the difference between LiDAR and cameras.

Tesla Autopilot Fails Wile E. Coyote Test, Drives Itself Into Picture of a Road.
Original video: https://lnkd.in/g4Qi8fd4
replied to their post 13 days ago

Big asset firms and tech giants will soon find a way to put a price on open source and monetize it.

reacted to their post with ❤️❤️ 13 days ago
reacted to nicolay-r's post with 👍 14 days ago
📢 With the recent release of Gemma-3, if you are interested in playing with textual chain-of-thought, the notebook below is a wrapper over the model (the native transformers inference API) for passing a predefined schema of prompts in batch mode.
https://github.com/nicolay-r/nlp-thirdgate/blob/master/tutorials/llm_gemma_3.ipynb

Limitation: the schema supports text only (for now), while Gemma-3 is a text+image-to-text model.

Model: google/gemma-3-1b-it
Provider: https://github.com/nicolay-r/nlp-thirdgate/blob/master/llm/transformers_gemma3.py
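
The linked notebook and provider script are the author's wrapper; as a rough standalone sketch of the same idea (batched, text-only chat prompts through the native transformers API), something like the following should work, assuming transformers >= 4.50, accelerate installed, and access to the gated google/gemma-3-1b-it checkpoint:

```python
# Sketch of batched text-only prompting with Gemma-3 via transformers
# (not the linked provider code). Assumes you have accepted the
# google/gemma-3-1b-it license and are logged in to the Hub.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

# The "schema" here is just a fixed prompt template applied to every input text.
template = "Think step by step, then give a short final answer:\n{}"
texts = [
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?",
    "Is 1001 divisible by 7?",
]
chats = [[{"role": "user", "content": template.format(t)}] for t in texts]

# The pipeline accepts a list of chat conversations and processes them in batches.
outputs = generator(chats, max_new_tokens=256, batch_size=2)
for out in outputs:
    # For chat inputs, generated_text is the whole conversation;
    # the last message is the model's reply.
    print(out[0]["generated_text"][-1]["content"])
```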
posted an update 14 days ago
Big companies are now training huge AI models with tons of data and billions of parameters, and the future seems to be about quantization: making those models smaller by turning high-precision numbers into lower-precision ones, like going from 32-bit to 8-bit, while changing accuracy by only about ±0.01%. There should be some standard unit of measurement for the ratio of model size reduction to accuracy lost.

What do you all think about this?
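
No such unit exists today, so purely as a toy illustration of what one could look like, here is a small sketch of a hypothetical "compression per accuracy point lost" ratio (the formula and every number below are made up for the example, not an established benchmark):

```python
# Toy sketch of a hypothetical "size reduction per accuracy point lost" metric.
def compression_efficiency(size_fp32_gb, size_quant_gb, acc_fp32, acc_quant, eps=1e-6):
    """Return (compression ratio, accuracy drop in points, efficiency ratio)."""
    compression = size_fp32_gb / size_quant_gb      # e.g. 4x for 32-bit -> 8-bit weights
    acc_drop = acc_fp32 - acc_quant                 # percentage points of accuracy lost
    efficiency = compression / max(acc_drop, eps)   # higher means cheaper compression
    return compression, acc_drop, efficiency

# Example with made-up numbers: a 7B-parameter model, FP32 vs INT8.
compression, drop, eff = compression_efficiency(
    size_fp32_gb=28.0, size_quant_gb=7.0, acc_fp32=71.2, acc_quant=70.9
)
print(f"{compression:.1f}x smaller, {drop:.2f} points lost, {eff:.0f}x per point")
```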