Sinisa Stanivuk

Stopwolf

AI & ML interests

Multilingual LLMs, STT and TTS models

Recent Activity

Organizations

Intellya Data Science Team · Data Is Better Together Contributor

Stopwolf's activity

New activity in Stopwolf/distilhubert-gtzan 30 days ago
reacted to onekq's post with 🔥 about 1 month ago
🐋DeepSeek 🐋 is the real OpenAI 😯
New activity in Stopwolf/whisper-small-sr about 1 month ago
upvoted an article about 1 month ago

Train 400x faster Static Embedding Models with Sentence Transformers

New activity in Stopwolf/whisper-tiny-minds14 about 1 month ago
New activity in stepfun-ai/GOT-OCR2_0 2 months ago

Batch inference

#38 opened 2 months ago by Stopwolf
reacted to nataliaElv's post with 👀 3 months ago
Would you like to get a high-quality dataset to pre-train LLMs in your language? 🌏

At Hugging Face we're preparing a collaborative annotation effort to build an open-source multilingual dataset as part of the Data is Better Together initiative.

Follow the link below, check if your language is listed and sign up to be a Language Lead!

https://forms.gle/s9nGajBh6Pb9G72J6
reacted to prithivMLmods's post with 🔥🚀 4 months ago
I’ve recently been experimenting with the Flux-Ultra Realism and Real Anime LoRA models, using the Flux.1-dev model as the base. The models and their demo examples are provided in the Flux LoRA DLC collections.📃

🥳Demo : 🔗 prithivMLmods/FLUX-LoRA-DLC

🥳Model:
- prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0
- prithivMLmods/Flux-Dev-Real-Anime-LoRA

🥳For more details, please visit the README.md of the Flux LoRA DLC Space & prithivMLmods/lora-space-collections-6714b72e0d49e1c97fbd6a32
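
For context, here is a hedged sketch of how one of these LoRAs might be loaded on the Flux.1-dev base with 🤗 diffusers. Only the LoRA repo ID comes from the post; the full base-model repo ID, pipeline class, dtype, and prompt are my assumptions, and the LoRA may expect specific trigger words (check its model card):

```python
import torch
from diffusers import FluxPipeline

# Load the Flux.1-dev base model (gated on the Hub; requires accepting its license).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the Ultra Realism LoRA from the post on top of the base model.
pipe.load_lora_weights("prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0")

# Example prompt only; adjust to the LoRA's recommended trigger words.
image = pipe("ultra-realistic portrait photo, golden hour lighting").images[0]
image.save("ultra_realism_sample.png")
```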
reacted to tomaarsen's post with 🔥 5 months ago
📣 Sentence Transformers v3.2.0 is out, marking the biggest release for inference in 2 years! It adds 2 new backends for embedding models, ONNX (+ optimization & quantization) and OpenVINO, allowing for speedups of up to 2x-3x, AND Static Embeddings for 500x speedups at a 10-20% accuracy cost.

1️⃣ ONNX Backend: This backend uses the ONNX Runtime to accelerate model inference on both CPU and GPU, reaching up to 1.4x-3x speedup depending on the precision. We also introduce 2 helper methods for optimizing and quantizing models for (much) faster inference.
2️⃣ OpenVINO Backend: This backend uses Intel's OpenVINO instead, outperforming ONNX in some situations on CPU.

Usage is as simple as SentenceTransformer("all-MiniLM-L6-v2", backend="onnx"). Does your model not have an ONNX or OpenVINO file yet? No worries - it'll be auto-exported for you. Thank me later 😉
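
A minimal sketch of that one-liner in context, assuming Sentence Transformers v3.2.0+ with the ONNX extras installed; the example sentences are placeholders:

```python
from sentence_transformers import SentenceTransformer

# Load with the ONNX backend; if the repo has no ONNX file yet,
# it is exported automatically on first load. Swap backend="openvino"
# to try Intel's OpenVINO runtime instead.
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")

# encode() works exactly as with the default PyTorch backend.
embeddings = model.encode([
    "ONNX and OpenVINO are new inference backends.",
    "Static Embeddings trade some accuracy for big speedups.",
])
print(embeddings.shape)  # (2, 384) for all-MiniLM-L6-v2
```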

🔒 Another major new feature is Static Embeddings: think word embeddings like GloVe and word2vec, but modernized. Static Embeddings are bags of token embeddings that are summed together to create text embeddings, allowing for lightning-fast embeddings that don't require any neural networks. They're initialized in one of 2 ways:

1️⃣ via Model2Vec, a new technique for distilling any Sentence Transformer model into static embeddings. Either via a pre-distilled model with from_model2vec or with from_distillation where you do the distillation yourself (see the sketch after this list). It'll only take 5 seconds on GPU & 2 minutes on CPU, no dataset needed.
2️⃣ Random initialization. This requires finetuning, but finetuning is extremely quick (e.g. I trained with 3 million pairs in 7 minutes). My final model was 6.6% worse than bge-base-en-v1.5, but 500x faster on CPU.
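
A rough sketch of the Model2Vec path, assuming the StaticEmbedding module is exposed under sentence_transformers.models and that from_distillation pulls in the model2vec package; the model IDs and device are illustrative, not taken from the post:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import StaticEmbedding

# Distill an existing Sentence Transformer into static embeddings
# (seconds on GPU, a couple of minutes on CPU, no dataset needed).
static = StaticEmbedding.from_distillation("BAAI/bge-base-en-v1.5", device="cpu")

# ...or load a pre-distilled Model2Vec model instead:
# static = StaticEmbedding.from_model2vec("minishlab/M2V_base_output")

# Wrap the static module so the usual encode() API is available.
model = SentenceTransformer(modules=[static])
embeddings = model.encode(["No neural network is needed at inference time."])
print(embeddings.shape)
```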

Full release notes: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.2.0
Documentation on Speeding up Inference: https://sbert.net/docs/sentence_transformer/usage/efficiency.html
reacted to alielfilali01's post with 👍 5 months ago
Don't you think we should add a tag "Evaluation" for datasets that are meant to be benchmarks and not for training?

At least then, when someone is collecting a group of datasets from an organization (or, say, the whole Hub), they could filter on that tag and avoid contaminating their "training" data.