LLMs are becoming increasingly useful for scaling annotation tasks, i.e. labelling and filtering. Combined with structured generation, this can be a very scalable way of doing some pre-annotation without requiring a large team of human annotators.
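A minimal sketch of the idea. The `llm_label` function below is a hypothetical stand-in for a real structured-generation call (in practice you would constrain the model's output to a JSON schema so every response parses):

```python
import json

# Labels we want the model to choose from (hypothetical task).
ALLOWED_LABELS = {"educational", "promotional", "other"}

def llm_label(text: str) -> str:
    """Stand-in for a structured-generation call to an LLM.

    A real implementation would constrain decoding to a JSON schema
    (e.g. via a structured-generation library) so the response always
    parses. Here we fake the model with a keyword heuristic.
    """
    label = "educational" if "tutorial" in text.lower() else "other"
    return json.dumps({"label": label})

def annotate(texts):
    """Pre-annotate a batch of texts, validating each structured response."""
    records = []
    for text in texts:
        response = json.loads(llm_label(text))
        if response["label"] not in ALLOWED_LABELS:
            raise ValueError(f"unexpected label: {response['label']}")
        records.append({"text": text, "label": response["label"]})
    return records

annotations = annotate(["A tutorial on fine-tuning", "Buy now!"])
```

Because the output is validated against a fixed label set, bad generations fail loudly instead of silently polluting the dataset.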
1M public posts from Bluesky's firehose API. Includes text, metadata, and language predictions. Perfect for experimenting with using ML for Bluesky!
Excited to see people build more open tools for a more open social media platform!
The Bluesky AT Protocol unlocks exciting possibilities:
- Building custom feeds using ML
- Creating dashboards for data exploration
- Developing custom models for Bluesky

To gather Bluesky resources on the Hub, I've created a community org: https://huggingface.co/bluesky-community
My first rather modest contribution is a dashboard that shows the number of posts per second. Drinking straight from the firehose API!
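The core of such a dashboard is just bucketing events by second. A stdlib-only sketch with simulated timestamps (the real stream arrives over a websocket connection to the firehose, which is omitted here):

```python
from collections import Counter
from datetime import datetime

def posts_per_second(timestamps):
    """Bucket ISO-format timestamps by whole second and count posts in each."""
    counts = Counter()
    for ts in timestamps:
        second = datetime.fromisoformat(ts).replace(microsecond=0)
        counts[second] += 1
    return counts

# Simulated firehose events; real events carry these as part of the record.
events = [
    "2024-11-27T12:00:00.120+00:00",
    "2024-11-27T12:00:00.810+00:00",
    "2024-11-27T12:00:01.005+00:00",
]
counts = posts_per_second(events)
```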
Yesterday, I shared a blog post on generating data for fine-tuning ColPali using the Qwen/Qwen2-VL-7B-Instruct model.
To simplify testing this approach, I created a Space that lets you generate queries from an input document page image: davanstrien/ColPali-Query-Generator
I think there is much room for improvement, but I'm excited about the potential for relatively small VLMs to create synthetic data.
ColPali is revolutionizing multimodal retrieval, but could it be even more effective with domain-specific fine-tuning?
Check out my latest blog post, where I guide you through creating a ColPali fine-tuning dataset using Qwen/Qwen2-VL-7B-Instruct to generate queries for a collection of UFO documents sourced from the Internet Archive.
The post covers:
- Introduction to data for ColPali models
- Using Qwen2-VL for retrieval query generation
- Tips for better query generation
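To give a flavour of the query-generation step: in a multimodal call, an instruction like the one built below would be sent alongside the document page image. This prompt is illustrative only, not the exact one used with Qwen2-VL in the post:

```python
def build_query_prompt(doc_description: str, n_queries: int = 3) -> str:
    """Build an instruction asking a VLM to propose retrieval queries.

    Illustrative sketch: the actual prompt in the blog post may differ.
    The document page image is passed separately in a real multimodal call.
    """
    return (
        f"You are given a page from {doc_description}. "
        f"Write {n_queries} short search queries a user might type "
        "to retrieve this page. Return one query per line."
    )

prompt = build_query_prompt("a UFO newsletter", n_queries=3)
```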
๐ธ I'm working on a pipeline for creating domain-specific ColPali fine-tuning datasets using a collection of UFO newsletters from the Internet Archive as a case study.
I will have a full notebook to share on Monday, but you can already take a look at the dataset here: davanstrien/ufo-ColPali
Is your summer reading list still empty? Curious if an LLM can generate a book blurb you'd enjoy and help build a KTO preference dataset at the same time?
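Unlike DPO's chosen/rejected pairs, KTO needs only a binary thumbs-up/down signal per completion. A sketch of what such records could look like (field names follow the common prompt/completion/label convention; check your trainer's documentation for the exact schema it expects):

```python
def make_kto_record(prompt: str, completion: str, liked: bool) -> dict:
    """One KTO-style preference record: a single completion with a
    binary desirability label, rather than a chosen/rejected pair."""
    return {"prompt": prompt, "completion": completion, "label": liked}

records = [
    make_kto_record("Write a blurb for a cozy mystery novel.",
                    "A small-town librarian turns detective...", True),
    make_kto_record("Write a blurb for a cozy mystery novel.",
                    "This book has pages and words in it.", False),
]
```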
Using the datasets viewer's new iframe embed support, I built a simple Space davanstrien/collection_dataset_viewer to quickly explore all the datasets inside a collection.
The collection is loaded from an environment variable, so you can duplicate this Space to create a Space for exploring datasets in another collection!
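The mechanics are simple: read the collection slug from the environment and build an embed URL per dataset. The URL pattern below is my assumption about the viewer's embed endpoint; verify it against the current datasets-viewer documentation:

```python
import os

def dataset_viewer_iframe(dataset_id: str) -> str:
    """Return an embeddable datasets-viewer URL for a dataset.

    Assumed URL pattern for the Hub's iframe embed support; double-check
    against the datasets-viewer docs before relying on it.
    """
    return f"https://huggingface.co/datasets/{dataset_id}/embed/viewer"

# The Space reads the target collection from an environment variable,
# so duplicating it and changing the variable points it elsewhere.
os.environ.setdefault("COLLECTION_SLUG", "user/example-collection")
collection = os.environ["COLLECTION_SLUG"]

url = dataset_viewer_iframe("davanstrien/ufo-ColPali")
```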
I'm developing a tool to simplify finding datasets suitable for specific tasks or libraries. Although it's still a work in progress, I've compiled a collection of datasets that likely support DPO: davanstrien/probably-dpo-datasets-667c409a557fe99a9ed39f0b
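One rough heuristic for "probably DPO": preference datasets in the DPO format typically carry both a `chosen` and a `rejected` column. The filter below is my own sketch, not the exact logic behind the linked collection:

```python
def probably_dpo(column_names) -> bool:
    """Heuristic: a dataset exposing both 'chosen' and 'rejected'
    columns likely follows the DPO preference format. Rough filter
    only; it will miss datasets using different column names."""
    cols = {c.lower() for c in column_names}
    return {"chosen", "rejected"} <= cols
```

Running this over each dataset's column metadata gives a cheap first pass that a human can then verify.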
Several methods/models have recently been shared to generate synthetic data from minimal or no initial seeds, essentially creating data directly from raw text.
IMO, approaches that rely on smaller models for synthetic data generation are quite valuable: they make it cheaper to scale up generation and democratize access to creating domain-specific synthetic datasets.