Eloi Eynard

Eyel

AI & ML interests

None yet

Eyel's activity

Reacted to nyuuzyou's post with 👀 about 1 month ago
🌐 Introducing Websim.ai User Projects Dataset - nyuuzyou/websim

Dataset highlights:
- 137,452 user projects from Websim.ai, a service for creating small sites using Large Language Models (LLMs)
- Primarily in English, with potential for multilingual content in generated websites
- Each entry includes: project metadata, user information, and generated HTML content
- Contains detailed information about project revisions, site generation, and user interactions
- Data covers a wide range of user-generated website projects created through AI assistance
- Dedicated to the public domain under Creative Commons Zero (CC0) license

The dataset can be used for analyzing AI-assisted web development trends, studying user behavior in LLM-powered creative tools, and exploring the capabilities of language models in web design.
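As a toy illustration of that kind of analysis, here is a small Python sketch that filters project records by the declared language of their generated HTML. The field names (`project_id`, `html`) are illustrative assumptions, not the dataset's documented schema:

```python
# Sketch: filtering Websim.ai project records by the language declared
# in the generated HTML. Field names are hypothetical.

def english_projects(records):
    """Keep records whose generated HTML declares an English lang attribute."""
    return [r for r in records if 'lang="en"' in r.get("html", "")]

sample = [
    {"project_id": "a1", "html": '<html lang="en"><body>Hi</body></html>'},
    {"project_id": "b2", "html": '<html lang="fr"><body>Salut</body></html>'},
]

print([r["project_id"] for r in english_projects(sample)])  # ['a1']
```

The same pattern extends to any of the metadata fields the post mentions (revisions, user interactions) once the real schema is known.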
Reacted to enzostvs's post with 🔥 2 months ago
Looking for a logo idea 👀 ?
I made a cool new space enzostvs/Logo.Ai to help you design a great logo in seconds!

Here are some examples of what you can do, feel free to share yours too! 🚀
Reacted to Cuiunbo's post with 👍 6 months ago
Introducing GUICourse! 🎉
By leveraging extensive OCR pretraining with grounding ability, we unlock the potential of parsing-free methods for GUI agents.
📄 Paper: GUICourse: From General Vision Language Models to Versatile GUI Agents (2406.11317)
🌐 GitHub repo: https://github.com/yiye3/GUICourse
📖 Datasets: yiye2023/GUIAct / yiye2023/GUIChat / yiye2023/GUIEnv
🎯 Models: RhapsodyAI/minicpm-guidance / RhapsodyAI/qwen_vl_guidance
Reacted to radames's post with 🔥 6 months ago
At Google I/O 2024, we're collaborating with the Google Visual Blocks team (https://visualblocks.withgoogle.com) to release custom Hugging Face nodes. Visual Blocks for ML is a browser-based tool that lets users build machine learning pipelines through a visual interface. We're launching nodes powered by Transformers.js, running models directly in the browser, as well as server-side nodes running Transformers pipeline tasks and LLMs via our hosted inference. With @Xenova @JasonMayes

You can learn more about it here https://huggingface.co/blog/radames/hugging-face-google-visual-blocks

Source-code for the custom nodes:
https://github.com/huggingface/visual-blocks-custom-components
Reacted to VictorSanh's post with 🔥 7 months ago
💬🔥Releasing idefics2-8b-chatty, the chat-optimized version of Idefics2!

It is a very efficient (8B parameters) state-of-the-art VLM, has been red-teamed, and comes with a few surprises:
- 📖Paper dissecting many of the experimental insights we learned while building Idefics2
- 🏎️TGI integration for blazing-fast inference (you can already run it locally with < 24GB GPU memory)
- 🏆 Ranking 2nd in its category (< 10B, open weights) in the awesome Open VLM Leaderboard, and now appearing in the incredible Vision Arena

Resources:
⏯️Playground: HuggingFaceM4/idefics2_playground
📖Paper: What matters when building vision-language models? (2405.02246)
🏋️‍♂️Model and red-teaming analysis: HuggingFaceM4/idefics2-8b-chatty
👀Resources to get started: HuggingFaceM4/idefics2-8b-chatty
🏆Open VLM Leaderboard: opencompass/open_vlm_leaderboard
🏟️Vision arena: WildVision/vision-arena
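For readers who want to try the model themselves, here is a minimal, untested sketch using the `transformers` library. The chat-message format follows the Idefics2 model card convention, but treat the details (generation arguments, memory use) as assumptions; the post's < 24 GB GPU figure refers to the TGI integration, not this plain-transformers path:

```python
MODEL_ID = "HuggingFaceM4/idefics2-8b-chatty"

def build_messages(question: str) -> list:
    """Build a single-turn chat message containing one image placeholder."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]

def ask(image, question: str, max_new_tokens: int = 256) -> str:
    """Answer a question about a PIL image with idefics2-8b-chatty.

    Downloads the full model weights on first call.
    """
    # Imported lazily so build_messages stays dependency-free.
    from transformers import AutoModelForVision2Seq, AutoProcessor

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = AutoModelForVision2Seq.from_pretrained(MODEL_ID, device_map="auto")
    prompt = processor.apply_chat_template(
        build_messages(question), add_generation_prompt=True
    )
    inputs = processor(text=prompt, images=[image], return_tensors="pt")
    inputs = inputs.to(model.device)
    generated = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.batch_decode(generated, skip_special_tokens=True)[0]
```

For production-grade throughput, the TGI integration mentioned in the post is the intended serving route.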
Reacted to HugoLaurencon's post with ❤️ 7 months ago
We release Idefics2-chatty, the chatbot-optimized version of Idefics2: HuggingFaceM4/idefics2-8b-chatty

Idefics2-chatty is better at following instructions and at Chain-of-Thought reasoning.

We also release a paper with many findings on how to build an efficient and performant Vision-Language Model: What matters when building vision-language models? (2405.02246)

How are you going to use the model, or what data are you going to fine-tune it on?