Natalia Elvira

nataliaElv

AI & ML interests

Data curation, high-quality data, multilinguality, NLP & computational linguistics

Organizations

Hugging Face, SomosNLP, Argilla, Blog-explorers, Argilla Explorers, Data Is Better Together, HuggingFaceFW-Dev, Hugging Face Discord Community, argilla-internal-testing, Argilla Warehouse, Dataset Tools, Data Is Better Together Contributor, Bluesky Community

Posts 4

How do your annotations for FineWeb2 compare to your teammates'?

I started contributing annotations to the FineWeb2 collaborative annotation sprint, and I wanted to know whether my labelling trends were similar to those of my teammates.

I did some analysis, and I wasn't surprised to see that I'm a bit harsher in my evaluations than my teammates πŸ˜‚
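A comparison like this can be sketched with pandas: group the exported annotations by annotator and look at each person's label distribution. The records and label names below are hypothetical stand-ins, not the actual FineWeb2 export format.

```python
import pandas as pd

# Hypothetical annotation records; a real analysis would load the
# exported sprint dataset instead of this inline example.
records = pd.DataFrame({
    "annotator": ["me", "me", "me", "teammate", "teammate", "teammate"],
    "label": ["low", "high", "low", "high", "high", "low"],
})

# Share of each label per annotator, to spot who labels more harshly.
distribution = (
    records.groupby("annotator")["label"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
print(distribution)
```

Each row of `distribution` is one annotator's labelling profile, so a quick glance shows whether one person hands out "low" far more often than the rest.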


Do you want to see how your annotations compare to others?
πŸ‘‰ Go to this Gradio space: nataliaElv/fineweb2_compare_my_annotations
✍️ Enter the dataset that you've contributed to and your Hugging Face username.

How were your results?
- Contribute some annotations: data-is-better-together/fineweb-c
- Join your language channel in Rocket.Chat: HuggingFaceFW/discussion

models

None public yet