Been reading about the "bigger models = better AI" narrative getting pushed back today.
@thomwolf tackled this head-on at Web Summit and highlighted how important small models are (and why closed-source companies haven't pushed for this). They're crushing it: today's 1B-parameter models outperform last year's 10B-parameter models.
Fascinating to hear him talk about the secret sauce behind this approach.
Reacted to asoria's post with ❤️ about 2 months ago:
When you come across an interesting dataset, you often wonder: Which topics frequently appear in these documents? What is this data really about?
Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.
I've been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity is what makes BERTopic powerful: you can swap in your preferred algorithm for each component. It also handles large datasets efficiently by merging models with the BERTopic.merge_models approach.
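The post itself doesn't include code, so here's a minimal sketch of what that modular setup and a merge_models call can look like, assuming BERTopic ≥ 0.16 (where merge_models was introduced); the 20 Newsgroups split is just a stand-in for your own data:

```python
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from sentence_transformers import SentenceTransformer
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer

# Stand-in corpus, split into two shards to illustrate merging.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data
docs_a, docs_b = docs[:2000], docs[2000:4000]

# Each component is swappable: embeddings, vectorizer, representation, etc.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
topic_model_a = BERTopic(
    embedding_model=embedding_model,
    vectorizer_model=CountVectorizer(stop_words="english"),
    representation_model=KeyBERTInspired(),
).fit(docs_a)

# Fit a second model on another shard of the data ...
topic_model_b = BERTopic(embedding_model=embedding_model).fit(docs_b)

# ... then combine the two instead of refitting on everything at once.
merged_model = BERTopic.merge_models([topic_model_a, topic_model_b], min_similarity=0.7)
print(merged_model.get_topic_info().head())
```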
How do we make this work? Here's the stack we're using:
- Data Source → Hugging Face datasets with DuckDB for retrieval
- Text Embeddings → Sentence Transformers (all-MiniLM-L6-v2)
- Dimensionality Reduction → RAPIDS cuML UMAP for GPU-accelerated performance
- Clustering → RAPIDS cuML HDBSCAN for fast clustering
- Tokenization → CountVectorizer
- Representation Tuning → KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
- Visualization → Datamapplot library

Check out the space and see how you can quickly generate topics from your dataset: datasets-topics/topics-generator
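As a rough sketch of how those pieces plug together (not the Space's actual code): the hf:// dataset path and column name below are placeholders, the hyperparameters are illustrative, cuML needs a CUDA GPU with RAPIDS installed, and the Llama-3 labelling and Datamapplot visualization steps are left out:

```python
import duckdb
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from cuml.cluster import HDBSCAN
from cuml.manifold import UMAP
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer

# 1. Data source: recent DuckDB versions can read a Hugging Face dataset's
#    Parquet files directly. Repo id and column name are placeholders.
docs = duckdb.sql(
    "SELECT text FROM 'hf://datasets/user/some-dataset/**/*.parquet' LIMIT 10000"
).df()["text"].tolist()

# 2. Text embeddings: Sentence Transformers (all-MiniLM-L6-v2).
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")

# 3-4. GPU-accelerated dimensionality reduction and clustering with RAPIDS cuML.
umap_model = UMAP(n_components=5, n_neighbors=15, min_dist=0.0)
hdbscan_model = HDBSCAN(min_cluster_size=20, gen_min_span_tree=True, prediction_data=True)

# 5. Tokenization for the topic representations.
vectorizer_model = CountVectorizer(stop_words="english")

# 6. Representation tuning (the LLM-based topic labelling step is omitted here).
representation_model = KeyBERTInspired()

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
    vectorizer_model=vectorizer_model,
    representation_model=representation_model,
    verbose=True,
)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())
```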