---
title: Topic modelling
emoji: 🚀
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: true
license: apache-2.0
---

# Topic modeller

Generate topics from open text in tabular data using BERTopic. Upload a data file (csv, xlsx, or parquet), then select the open text column that you want to use to generate topics. Choose the minimum number of similar documents per topic and the maximum total number of topics, then click 'Extract topics'. Duplicate this space, or clone it to your computer, to avoid queues here!
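
For reference, the underlying topic extraction is roughly equivalent to the BERTopic call sketched below. The file name, column name, and parameter values are illustrative assumptions, not the app's exact internals.

```python
import pandas as pd
from bertopic import BERTopic

# Load the tabular data and pick the open text column (names are illustrative).
df = pd.read_parquet("passages.parquet")
docs = df["text"].astype(str).tolist()

# min_topic_size roughly corresponds to 'minimum similar documents per topic',
# nr_topics to 'maximum total topics' ("auto" lets BERTopic reduce topics itself).
topic_model = BERTopic(min_topic_size=15, nr_topics=50)
topics, probs = topic_model.fit_transform(docs)

# Inspect the resulting topics and their sizes.
print(topic_model.get_topic_info().head())
```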

The app uses TF-IDF-based embeddings by default, which are fast but produce relatively low-quality clusters. Switch to Mixedbread large v1 model embeddings (512 dimensions, 8-bit quantisation) on the 'Options' tab for much higher-quality topics at the cost of slower processing. If you have an embeddings .npz file previously made with this model, you can load it at the same time to skip the embedding step. If you have a pre-defined list of topics for zero-shot modelling, you can upload it as a csv file under 'I have my own list of topics...'. Further configuration options are available under the 'Options' tab.

Topic representation with LLMs is currently based on Phi-3-mini-128k-instruct-GGUF, which is quite slow on CPU, so use a GPU-enabled computer if possible, building from the requirements_gpu.txt file in the base folder.
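
As a rough illustration of the embedding and zero-shot options, the sketch below embeds documents with the Mixedbread large v1 model, saves and reloads them as an .npz file, and passes a pre-defined topic list to BERTopic. The model ID, .npz key, file names, topic list, and similarity threshold are assumptions for illustration; the app's own file format may differ.

```python
import numpy as np
import pandas as pd
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

docs = pd.read_parquet("passages.parquet")["text"].astype(str).tolist()

# Embed documents with the Mixedbread large v1 model and save the result for reuse.
embedding_model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
embeddings = embedding_model.encode(docs, show_progress_bar=True)
np.savez_compressed("embeddings.npz", embeddings=embeddings)

# On a later run, load the saved embeddings to skip the embedding step.
embeddings = np.load("embeddings.npz")["embeddings"]

# Optional zero-shot modelling against a pre-defined list of topics.
zeroshot_topics = ["housing", "transport", "health"]  # illustrative topic list
topic_model = BERTopic(
    embedding_model=embedding_model,
    zeroshot_topic_list=zeroshot_topics,
    zeroshot_min_similarity=0.8,
)
topics, probs = topic_model.fit_transform(docs, embeddings=embeddings)
```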

For small datasets, consider breaking up your text into sentences under 'Clean data' -> 'Split open text...' before topic modelling.
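
The sentence-splitting step is broadly similar to the sketch below, which splits each row's text into sentences and explodes the dataframe so that each sentence becomes its own document. The file and column names, and the use of NLTK, are assumptions; the app's own splitter may differ.

```python
import nltk
import pandas as pd
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # one-off download of the sentence tokenizer

df = pd.read_csv("data.csv")  # illustrative file and column names
df["sentence"] = df["text"].astype(str).apply(sent_tokenize)
df = df.explode("sentence").reset_index(drop=True)

# Each row now holds a single sentence, giving more (shorter) documents for topic modelling.
```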

To test the tool here, I suggest the Wikipedia mini dataset; choose passages.parquet.