
youssef boulaouane

byoussef

AI & ML interests

None yet

Recent Activity

liked a Space about 18 hours ago
Remade-AI/remade-effects
liked a Space 1 day ago
prs-eth/thera
upvoted a collection 25 days ago
SigLIP2

Organizations

Social Post Explorers
Hugging Face Discord Community

byoussef's activity

reacted to tianchez's post with 🚀 30 days ago
Introducing VLM-R1!

GRPO helped DeepSeek R1 learn to reason. Can it also help VLMs perform better on general computer vision tasks?

The answer is YES, and it generalizes better than SFT. We trained Qwen 2.5 VL 3B on RefCOCO (a visual grounding task) and evaluated on RefCOCO Val and RefGTA (an OOD task).

https://github.com/om-ai-lab/VLM-R1
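For context, the core trick in GRPO is the group-relative advantage: sample several responses per prompt, score each with a reward (for visual grounding, something like IoU against the ground-truth box), and normalize each reward against its own group. A minimal sketch of that normalization step (the reward values and group size are illustrative, not taken from the VLM-R1 repo):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each reward by its group's mean/std.

    rewards: (num_prompts, group_size) tensor, one row per prompt,
             one column per sampled response.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy example: 2 prompts, 4 sampled responses each; rewards are IoU scores
# against the ground-truth box (made-up values).
iou_rewards = torch.tensor([[0.9, 0.4, 0.7, 0.1],
                            [0.2, 0.3, 0.8, 0.6]])
print(grpo_advantages(iou_rewards))
```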
reacted to andrewrreed's post with 🔥 2 months ago
🚀 Supercharge your LLM apps with Langfuse on Hugging Face Spaces!

Langfuse brings end-to-end observability and tooling to accelerate your dev workflow from experiments through production.

Now available as a Docker Space directly on the HF Hub! 🤗

πŸ” Trace everything: monitor LLM calls, retrieval, and agent actions with popular frameworks
1⃣ One-click deployment: on Spaces with persistent storage and integrated OAuth
πŸ›  Simple Prompt Management: Version, edit, and update without redeployment
βœ… Intuitive Evals: Collect user feedback, run model/prompt evaluations, and improve quality
πŸ“Š Dataset Creation: Build datasets directly from production data to enhance future performance

Kudos to the Langfuse team for this collab and the awesome, open-first product they're building! 👏 @marcklingen @Clemo @MJannik

🔗 Space: langfuse/langfuse-template-space
🔗 Docs: https://huggingface.co/docs/hub/spaces-sdks-docker-langfuse
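For a rough idea of what "trace everything" looks like in code, here is a minimal sketch using the Langfuse Python SDK's observe decorator together with its OpenAI drop-in wrapper (exact import paths vary by SDK version; the host URL, keys, and model name below are placeholders, not values from the linked Space):

```python
import os

from langfuse.decorators import observe  # v2-style decorator API
from langfuse.openai import OpenAI       # drop-in wrapper that traces OpenAI calls

# Point the SDK at your own Langfuse deployment (placeholder values).
os.environ["LANGFUSE_HOST"] = "https://your-langfuse-space.hf.space"
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@observe()  # records this function call as a trace in Langfuse
def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer("What does Langfuse trace?"))
```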
reacted to merve's post with 🚀 3 months ago
small but mighty 🔥
You can fine-tune SmolVLM on an L4 with a batch size of 4, and it will only take 16.4 GB of VRAM 🫰🏻 With gradient accumulation, the simulated batch size is 16 ✨
I made a notebook that includes all the goodies: QLoRA, gradient accumulation, and gradient checkpointing, with explanations of how they work: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
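For a sense of how those pieces fit together, here is a minimal configuration sketch with transformers, peft, and bitsandbytes; the model ID, LoRA target modules, and hyperparameters are assumptions for illustration, and the linked notebook is the authoritative reference:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForVision2Seq, AutoProcessor,
                          BitsAndBytesConfig, TrainingArguments)

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed model ID

# QLoRA: load the frozen base model in 4-bit NF4
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapters on the attention projections (illustrative choice)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
))

# Batch size 4 x 4 accumulation steps = simulated batch size 16
training_args = TrainingArguments(
    output_dir="smolvlm-ft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,  # recompute activations to save VRAM
    bf16=True,
    num_train_epochs=1,
)
```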
reacted to rwightman's post with 🚀 4 months ago
Want to validate some hparams or figure out which timm model to use before committing to downloading or training with a large dataset? Try mini-imagenet: timm/mini-imagenet

I had this sitting on my drive and forgot where I pulled it together from. It's 100 classes of ImageNet, 50k train and 10k val images (from the ImageNet-1k train set), and 5k test images (from the ImageNet-1k val set). 7.4GB instead of > 100GB for the full ImageNet-1k. This version is not reduced resolution like some other 'mini' versions. Super easy to use with the timm train/val scripts; check out the dataset card.

I often check fine-tuning with even smaller datasets like:
* timm/resisc45
* timm/oxford-iiit-pet
But those are a bit small to train any modest-size model without starting from pretrained weights.
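As a quick sketch of pulling the dataset and pairing it with a timm model in Python (the model choice, split names, and column names are assumptions; the dataset card covers the timm train/val script invocation):

```python
import timm
import torch
from datasets import load_dataset

# Pull mini-imagenet from the Hub (~7.4GB, 100 classes)
ds = load_dataset("timm/mini-imagenet")
print(ds)  # inspect the available splits and columns

# Small model for quick hparam checks (model choice is illustrative)
model = timm.create_model("resnet18", pretrained=False, num_classes=100)
model.eval()

# Build the preprocessing transform matching the model's data config
cfg = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**cfg)

sample = ds["train"][0]  # assumes an "image" column holding PIL images
x = transform(sample["image"].convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # expected: torch.Size([1, 100])
```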