Christopher Schröder

cschroeder

AI & ML interests

NLP, Active Learning, Text Representations, PyTorch

Organizations

Webis Group · Webis Hugging Face Workshop · small-text · German LLM Tokenizers · Social Post Explorers · GERTuraX · Hugging Face Discord Community · ScaDS.AI German LLM

cschroeder's activity

posted an update about 5 hours ago
🔥 Final Call and Deadline Extension: Survey on Data Annotation and Active Learning

Short summary: We need your support for a web survey in which we investigate how recent advancements in natural language processing, particularly LLMs, have influenced the need for labeled data in supervised machine learning, with a focus on, but not limited to, active learning. See the original post for details.

➡️ Extended Deadline: January 26th, 2025.
Please consider participating or sharing our survey! (If you have any experience with supervised learning in natural language processing, you are eligible to participate in our survey.)

Survey: https://bildungsportal.sachsen.de/umfragen/limesurvey/index.php/538271
replied to their post 13 days ago

Just a quick note: I will not enter any ideological debates here again.

First off, I think this is a non-issue regardless of which license we use. This is first and foremost a scientific study, and the dataset we're producing is more of a byproduct; its main purpose is to help other researchers verify our findings. It seems like there might be some misconceptions about this dataset: think of it as a table of answer codes. It is not a text dataset and therefore not interesting or useful for LLM training (or similar).

Second, we made this decision because the survey doesn't have any funding and relies on people generously sharing their opinions (without compensation). Given the growing skepticism around data collection, we wanted to be especially careful not to discourage users from participating. Our primary goal is to conduct a study with a population as diverse as possible, and we did not want to lose potential participants who might be less inclined to give away their data without compensation.

posted an update 15 days ago
Here's just one of the many exciting questions from our survey. If these topics resonate with you and you have experience working on supervised learning with text (i.e., supervised learning in Natural Language Processing), we warmly invite you to participate!

Survey: https://bildungsportal.sachsen.de/umfragen/limesurvey/index.php/538271
Estimated time required: 5–15 minutes
Deadline for participation: January 12, 2025

โ€”

โค๏ธ Weโ€™re seeking responses from across the globe! If you know 1โ€“3 people who might qualify for this surveyโ€”particularly those in different regionsโ€”please share it with them. Weโ€™d really appreciate it!

#NLProc #ActiveLearning #ML
posted an update 26 days ago
💡 Looking for support: Have you ever had to overcome a lack of labeled data to deal with an NLP task?

Are you working on Natural Language Processing tasks and have faced the challenge of a lack of labeled data before? We are currently conducting a survey to explore the strategies used to address this bottleneck, especially in the context of recent advancements, including but not limited to large language models.

The survey is non-commercial and conducted solely for academic research purposes. The results will contribute to an open-access publication that also benefits the community.

👉 With only 5–15 minutes of your time, you would greatly help to investigate which strategies are used by the #NLP community to overcome a lack of labeled data.

โค๏ธHow you can help even more: If you know others working on supervised learning and NLP, please share this survey with themโ€”weโ€™d really appreciate it!

Survey: https://bildungsportal.sachsen.de/umfragen/limesurvey/index.php/538271
Estimated time required: 5–15 minutes
Deadline for participation: January 12, 2025

#NLP #ML
posted an update about 2 months ago
๐Ÿฃ New release: small-text v2.0.0.dev1

With small language models on the rise, the new version of small-text has been long overdue! Despite the generative AI hype, many real-world tasks still rely on supervised learning, which in turn relies on labeled data.

Highlights:
- Four new query strategies: Try even more combinations than before.
- Vector indices integration: HNSW and KNN indices are now available via a unified interface and can easily be used within your code.
- Simplified installation: We dropped the torchtext dependency and cleaned up a lot of interfaces.

Github: https://github.com/webis-de/small-text

👂 Try it out for yourself! We are eager to hear your feedback.
🔧 Share your small-text applications and experiments in the newly added showcase section.
🌟 Support the project by leaving a star on the repo!
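For readers new to active learning: a query strategy scores the unlabeled pool and picks the samples most worth annotating. Here is a minimal uncertainty-sampling (least-confidence) sketch in plain NumPy; it illustrates the general technique only and is not small-text's actual API (the function name is hypothetical):

```python
import numpy as np

def least_confidence_query(proba, n):
    """Select the n unlabeled samples whose top predicted class
    probability is lowest, i.e., where the model is least confident."""
    confidence = proba.max(axis=1)     # top-class probability per sample
    return np.argsort(confidence)[:n]  # indices of the least confident samples

# Example: class probabilities for 4 unlabeled samples
proba = np.array([
    [0.90, 0.10],  # confident
    [0.55, 0.45],  # uncertain
    [0.80, 0.20],
    [0.51, 0.49],  # most uncertain
])
least_confidence_query(proba, 2)  # → array([3, 1])
```

Query strategies differ mainly in how they score the pool; the selection step (take the top-n scored samples) usually stays the same.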

#activelearning #nlproc #machinelearning
posted an update 2 months ago
#EMNLP2024 is happening soon! Unfortunately, I will not be on site, but I will present our poster virtually on Wednesday, Nov 13 (7:45 EST / 13:45 CET) in Virtual Poster Session 2.

In this work, we leverage self-training in an active learning loop in order to train small language models with even less data. Hope to see you there!
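Conceptually, combining self-training with active learning means each round splits the unlabeled pool by model confidence: highly confident predictions become pseudo-labels, while the least confident samples are sent to a human annotator. The following NumPy sketch is a simplified illustration of that idea, not the paper's exact procedure; the threshold and names are placeholders:

```python
import numpy as np

def self_training_al_round(proba, n_query, tau=0.95):
    """One round: pseudo-label confident samples (self-training) and
    query the least confident ones for annotation (active learning).
    proba: predicted class probabilities over the unlabeled pool."""
    confidence = proba.max(axis=1)
    pseudo = np.where(confidence >= tau)[0]        # trusted model predictions
    pseudo_labels = proba[pseudo].argmax(axis=1)   # their pseudo-labels
    query = np.argsort(confidence)[:n_query]       # sent to the human annotator
    return pseudo, pseudo_labels, query

proba = np.array([
    [0.98, 0.02],  # pseudo-labeled as class 0
    [0.60, 0.40],  # queried
    [0.03, 0.97],  # pseudo-labeled as class 1
    [0.55, 0.45],  # queried
])
pseudo, labels, query = self_training_al_round(proba, n_query=2)
```

Both index sets then extend the training data for the next round, which is how the labeling budget is stretched further.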
reacted to tomaarsen's post with 🔥 4 months ago
I've just shipped the Sentence Transformers v3.1.1 patch release, fixing the hard negatives mining utility for some models. This utility is extremely useful to get more performance out of your embedding training data.

⛏ Hard negatives are texts that are rather similar to some anchor text (e.g. a query), but are not the correct match. They're difficult for a model to distinguish from the correct answer, often resulting in a stronger model after training.
mine_hard_negatives docs: https://sbert.net/docs/package_reference/util.html#sentence_transformers.util.mine_hard_negatives
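Conceptually, the mining step ranks candidates by similarity to the anchor and keeps the most similar non-positives. The sketch below is a simplified NumPy illustration of that idea only; the actual Sentence Transformers utility operates on full datasets with an embedding model and offers additional filtering options:

```python
import numpy as np

def hardest_negatives(query_emb, cand_embs, positive_idx, k=2):
    """Return the k candidates most similar (cosine) to the query
    that are NOT the known positive, i.e., the hardest negatives."""
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sims = c @ q                   # cosine similarity to each candidate
    sims[positive_idx] = -np.inf   # exclude the correct match
    return np.argsort(-sims)[:k]   # most similar non-positives first

query = np.array([1.0, 0.0])
cands = np.array([
    [1.0, 0.1],   # positive (index 0)
    [1.0, 0.3],   # similar but wrong → hard negative
    [0.0, 1.0],   # easy negative
    [1.0, 0.5],   # hard negative
])
hardest_negatives(query, cands, positive_idx=0)  # → array([1, 3])
```

The resulting (anchor, positive, hard negative) triplets make contrastive training signals much more informative than random negatives.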

🔓 Beyond that, this release removes the numpy<2 restriction from v3.1.0. This was previously required for Windows as not all third-party libraries were updated to support numpy v2. With Sentence Transformers, you can now choose v1 or v2 of numpy.

Check out the full release notes here: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.1.1

I'm looking forward to releasing v3.2; I have some exciting things planned 🚀
replied to do-me's post 4 months ago

Did not know text-splitter yet, thanks!

reacted to do-me's post with 👀 4 months ago
What are your favorite text chunkers/splitters?
Mine are:
- https://github.com/benbrandt/text-splitter (Rust/Python, battle-tested, Wasm version coming soon)
- https://github.com/umarbutler/semchunk (Python, really performant but some issues with huge docs)

I tried the huge Jina AI regex, but it failed for my (admittedly messy) documents, e.g. from EUR-LEX. Their free segmenter API is really cool but unfortunately times out on my huge docs (~100 pages): https://jina.ai/segmenter/

Also, I tried to write a Vanilla JS chunker with a simple, adjustable hierarchical logic (inspired from the above). I think it does a decent job for the few lines of code: https://do-me.github.io/js-text-chunker/
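A hierarchical chunker of the kind described can be sketched in a few lines of Python. This is a toy version under simple assumptions (paragraph-level pieces first, sentence-level fallback for oversized paragraphs, greedy packing up to a character limit), not the logic of any of the libraries above:

```python
import re

def chunk(text, max_len=80):
    """Greedily pack text pieces into chunks of at most max_len characters.
    Paragraphs that exceed the limit are split further on sentence boundaries."""
    pieces = []
    for para in text.split("\n\n"):
        if len(para) <= max_len:
            pieces.append(para)
        else:
            # fall back to sentence-level splitting for oversized paragraphs
            pieces.extend(re.split(r"(?<=[.!?])\s+", para))
    chunks, current = [], ""
    for piece in pieces:
        if current and len(current) + len(piece) + 1 > max_len:
            chunks.append(current)   # current chunk is full; start a new one
            current = piece
        else:
            current = f"{current} {piece}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Real chunkers add more levels to the hierarchy (headings, words, tokens) and usually measure size in model tokens rather than characters, but the recursive fallback idea is the same.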

Happy to hear your thoughts!
upvoted an article 4 months ago

AI Policy @🤗: Open ML Considerations in the EU AI Act

reacted to gaodrew's post with 🔥 4 months ago
We used the Hugging Face Trainer to fine-tune DeBERTa-v3-base for Personally Identifiable Information (PII) detection, achieving 99.44% overall accuracy (98.27% recall for PII detection).

Please try our model (Colab Quickstart available) and let us know what you think:
iiiorg/piiranha-v1-detect-personal-information