How do your annotations for FineWeb2 compare to your teammates'?
I started contributing some annotations to the FineWeb2 collaborative annotation sprint and I wanted to know if my labelling trends were similar to those of my teammates.
I did some analysis and I wasn't surprised to see that I'm a bit harsher in my evaluations than my teammates.
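If you'd like to run a similar check yourself, here's a minimal sketch of the idea: load an export of the annotations and compare each annotator's label distribution. The file name and the "annotator_id" / "label" column names are hypothetical placeholders; adjust them to whatever your dataset actually uses.

```python
import pandas as pd

# Hypothetical export of the sprint annotations (adjust path and format).
df = pd.read_parquet("annotations.parquet")

# Share of each label per annotator (each row sums to 1.0),
# so you can see who labels more harshly than the rest.
trends = pd.crosstab(df["annotator_id"], df["label"], normalize="index").round(3)

print(trends)                       # everyone's labelling trends
print(trends.loc["your-hf-username"])  # your own distribution (placeholder username)
```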
Do you want to see how your annotations compare to others'? Go to this Gradio space: nataliaElv/fineweb2_compare_my_annotations and enter the dataset you've contributed to along with your Hugging Face username.
We're so close to reaching 100 languages! Can you help us cover the remaining 200? Check if we're still looking for language leads for your language: nataliaElv/language-leads-dashboard