Arthur Zucker

ArthurZ

ArthurZ's activity

Reacted to Xenova's post with πŸ”₯ 6 days ago
Have you tried out πŸ€— Transformers.js v3? Here are the new features:
⚑ WebGPU support (up to 100x faster than WASM)
πŸ”’ New quantization formats (dtypes)
πŸ› 120 supported architectures in total
πŸ“‚ 25 new example projects and templates
πŸ€– Over 1200 pre-converted models
🌐 Node.js (ESM + CJS), Deno, and Bun compatibility
🏑 A new home on GitHub and NPM

Get started with npm i @huggingface/transformers.

Learn more in our blog post: https://huggingface.co/blog/transformersjs-v3
Reacted to davidberenstein1957's post with πŸ‘€ 6 days ago
For anyone who struggles with NER or information extraction with LLMs.

We showed an efficient workflow for token classification, including zero-shot suggestions and model fine-tuning with Argilla, GliNER, the NuMind NuExtract LLM, and SpanMarker. @argilla

Video: https://youtu.be/JvLpaYgNd84?feature=shared
Notebooks and slides included to try it yourself πŸ™‚
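Token-classification models like the ones in this workflow typically emit one BIO tag per token, which then get merged into entity spans before annotation or fine-tuning. A minimal sketch of that merging step (the `bioToSpans` helper is hypothetical, not part of Argilla, GliNER, or SpanMarker):

```javascript
// Merge per-token BIO tags into entity spans.
// Input: parallel arrays of tokens and tags like "B-PER", "I-PER", "O".
// Output: array of { label, text } entity spans.
function bioToSpans(tokens, tags) {
  const spans = [];
  let current = null;
  tags.forEach((tag, i) => {
    if (tag.startsWith("B-")) {
      // "B-" always opens a new entity, closing any open one.
      if (current) spans.push(current);
      current = { label: tag.slice(2), text: tokens[i] };
    } else if (tag.startsWith("I-") && current && tag.slice(2) === current.label) {
      // "I-" continues the current entity only if the label matches.
      current.text += " " + tokens[i];
    } else {
      // "O" (or a dangling "I-") closes the current entity.
      if (current) spans.push(current);
      current = null;
    }
  });
  if (current) spans.push(current);
  return spans;
}

console.log(bioToSpans(
  ["Barack", "Obama", "visited", "Paris"],
  ["B-PER", "I-PER", "O", "B-LOC"]
));
// [ { label: 'PER', text: 'Barack Obama' }, { label: 'LOC', text: 'Paris' } ]
```

Zero-shot suggesters and fine-tuned models can disagree at the tag level, so normalizing both to spans like this makes their outputs directly comparable in an annotation UI.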
Reacted to LukeNeumann's post with 🀯 6 days ago
Nine years ago, I uploaded the first 8K resolution video to YouTube and I've been stockpiling 8K footage ever since: https://www.youtube.com/watch?v=sLprVF6d7Ug&t

Should @Overlaiapp release the first open-source 8K video dataset?

Could anyone even fine-tune a model with this? 😅
Β·
Reacted to their post with ❀️ 6 days ago
Reacted to AkimfromParis's post with β€οΈπŸ‘ 6 days ago
πŸ‡―πŸ‡΅ The Open Japanese LLM Leaderboard created by LLM-jp 🌸 in partnership with HuggingFace πŸ€— was released today!

Blog: https://huggingface.co/blog/leaderboard-japanese
Space: llm-jp/open-japanese-llm-leaderboard

🌍 The leaderboard is available in both Japanese and English
πŸ“š Based on the evaluation tool, llm-jp-eval with more than 20 datasets for Japanese LLMs
πŸ“Š The leaderboard showcases all the metrics for NLP experts, plus averages for NLP beginners
πŸ’» For the comfort of users, we chose a horizontal UI, and implemented it in a light and dark theme on Gradio
πŸ”¬ The radar chart provides a very interesting visualization of metrics!
🌱 We are using the Japanese research platform, MDX, so please be patient!
⚡ LLMs bigger than 70B will be evaluated soon…

How do you say "GPUs Go Brrr" in Japanese? → GPUがブンブン〜! (pronounced "GPU ga bunbun!") 🔥
Reacted to monsoon-nlp's post with πŸ‘€ 6 days ago
Great to see Tatta Bio release an embeddings version of their DNA/protein language model 🧬: tattabio/gLM2_650M_embed
Reacted to AdinaY's post with πŸ‘ 6 days ago
Reacted to jsulz's post with πŸš€ 6 days ago
In August, the XetHub team joined Hugging Face (https://huggingface.co/blog/xethub-joins-hf), and we've been rolling up our sleeves to bring the best of both worlds together. We started with a deep dive into the current state of files stored with Git LFS on the Hub.

Getting this information was no small feat. We had to:
* Analyze a complete database dump of all repositories and files stored in Git LFS across Hugging Face.
* Parse through metadata on file sizes and types to accurately map the storage breakdown across Spaces, Models, and Datasets.

You can read more about the findings (with some jaw-dropping stats + charts) here https://www.linkedin.com/feed/update/urn:li:activity:7244486280351285248
Reacted to jsulz's post with 🧠 6 days ago
When the XetHub crew joined Hugging Face this fall, @erinys and I started brainstorming how to share our work to replace Git LFS on the Hub. Uploading and downloading large models and datasets takes precious time. That’s where our chunk-based approach comes in.

Instead of versioning files (like Git and Git LFS), we version variable-sized chunks of data. For the Hugging Face community, this means:

⏩ Only upload the chunks that changed.
πŸš€ Download just the updates, not the whole file.
🧠 We store your files as deduplicated chunks.

In our benchmarks, we found that using content-defined chunking (CDC) to store iterative model and dataset versions led to transfer speedups of ~2x, but this isn't just a performance boost. It's a rethinking of how we manage models and datasets on the Hub.

We're planning to bring our new storage backend to the Hub in early 2025 - check out our blog to dive deeper, and let us know: how could this improve your workflows?

https://huggingface.co/blog/from-files-to-chunks
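The chunk-based idea above can be illustrated with a toy content-defined chunker: a rolling hash over recent bytes decides where chunks end, so boundaries follow the content rather than fixed offsets, and appending or editing a file leaves most earlier chunks identical. This is a hedged sketch; the hash, mask, and minimum chunk size are arbitrary toy choices, not Xet's actual parameters:

```javascript
// Toy content-defined chunking: cut a chunk wherever a rolling hash hits a
// boundary condition. The low bits of the hash depend only on the most
// recent bytes, so boundaries re-synchronize after local edits.
function chunkify(buf, mask = 0x1f, minSize = 8) {
  const chunks = [];
  let start = 0, hash = 0;
  for (let i = 0; i < buf.length; i++) {
    hash = ((hash << 1) + buf[i]) & 0xffffffff;
    if ((hash & mask) === 0 && i + 1 - start >= minSize) {
      chunks.push(buf.subarray(start, i + 1));
      start = i + 1;
      hash = 0;
    }
  }
  if (start < buf.length) chunks.push(buf.subarray(start));
  return chunks;
}

// Chunks are addressed by their content, so identical chunks across file
// versions are stored once. (A real system would use a cryptographic hash
// of the chunk, not the raw bytes, as the key.)
const keyOf = (chunk) => chunk.toString("hex");

// v2 appends data to v1: every complete chunk of v1 reappears unchanged
// in v2, so only the tail of the file needs to be uploaded.
const v1 = Buffer.from(Array.from({ length: 1000 }, (_, i) => (i * 31 + 7) % 256));
const v2 = Buffer.concat([v1, Buffer.from("new training checkpoint data")]);
const stored = new Set(chunkify(v1).map(keyOf));
const toUpload = chunkify(v2).filter((c) => !stored.has(keyOf(c)));
console.log(`v2 has ${chunkify(v2).length} chunks; ${toUpload.length} are new`);
```

With fixed-size chunks (or whole-file versioning, as in Git LFS), a one-byte insertion near the start of a file would shift every later boundary and force a full re-upload; content-defined boundaries are what make the incremental upload/download story above work.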
New activity in mistralai/Pixtral-Large-Instruct-2411 8 days ago

Upload transformers version

#3 opened 8 days ago by ArthurZ
posted an update 8 days ago
New activity in mistral-community/pixtral-12b about 1 month ago

Update model weight

#13 opened about 1 month ago by nguyen-brat